Test Report: Docker_macOS 12230

                    
1c76ff5cea01605c2d985c010644edf1e689d34b:2021-08-12:19970

Failed tests (20/247)

TestDownloadOnly/v1.14.0/preload-exists (0.2s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
aaa_download_only_test.go:105: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.14.0/preload-exists (0.20s)
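For context on the failure above: a minimal sketch of the kind of existence check that produces this error, i.e. stat the expected preload tarball under the test's MINIKUBE_HOME cache and fail if it is missing. The helper name preloadTarballPath and the path construction are illustrative assumptions reconstructed from the error message, not the actual code at aaa_download_only_test.go:105.

// sketch only: reproduces the shape of the preload-exists check, not minikube's implementation
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadTarballPath builds the cache path using the filename pattern seen in the log:
// preloaded-images-k8s-v11-<k8s version>-docker-overlay2-amd64.tar.lz4
func preloadTarballPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v11-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	home := os.Getenv("MINIKUBE_HOME") // in the CI run above this is .../.minikube
	p := preloadTarballPath(home, "v1.14.0")
	if _, err := os.Stat(p); err != nil {
		// mirrors the failure reported by the test
		fmt.Printf("failed to verify preloaded tarball file exists: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("preload tarball present:", p)
}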

                                                
                                    
TestAddons (90.78s)

=== RUN   TestAddons
addons_test.go:75: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20210812170028-27878 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker  --addons=helm-tiller --addons=gcp-auth
addons_test.go:75: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p addons-20210812170028-27878 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker  --addons=helm-tiller --addons=gcp-auth: exit status 1 (1m17.641564041s)

                                                
                                                
-- stdout --
	* [addons-20210812170028-27878] minikube v1.22.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node addons-20210812170028-27878 in cluster addons-20210812170028-27878
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...
	  - Using image quay.io/operator-framework/olm:v0.17.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	  - Using image gcr.io/kubernetes-helm/tiller:v2.16.12
	  - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	  - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	  - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	  - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	  - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	  - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	  - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	  - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	  - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	  - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	  - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 17:00:29.014379   28193 out.go:298] Setting OutFile to fd 1 ...
	I0812 17:00:29.014514   28193 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 17:00:29.014519   28193 out.go:311] Setting ErrFile to fd 2...
	I0812 17:00:29.014522   28193 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 17:00:29.014610   28193 root.go:313] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0812 17:00:29.014943   28193 out.go:305] Setting JSON to false
	I0812 17:00:29.033628   28193 start.go:111] hostinfo: {"hostname":"37310.local","uptime":10803,"bootTime":1628802026,"procs":339,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"c86236b2-4976-3542-80ca-74a6b8b4ba03"}
	W0812 17:00:29.034200   28193 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0812 17:00:29.083470   28193 out.go:177] * [addons-20210812170028-27878] minikube v1.22.0 on Darwin 11.2.3
	I0812 17:00:29.083566   28193 notify.go:169] Checking for updates...
	I0812 17:00:29.109364   28193 out.go:177]   - MINIKUBE_LOCATION=12230
	I0812 17:00:29.135454   28193 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0812 17:00:29.161434   28193 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0812 17:00:29.187408   28193 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0812 17:00:29.187623   28193 driver.go:335] Setting default libvirt URI to qemu:///system
	I0812 17:00:29.280954   28193 docker.go:132] docker version: linux-20.10.6
	I0812 17:00:29.281090   28193 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 17:00:29.458759   28193 info.go:263] docker info: {ID:NUVB:KIYS:WZ5S:BBBQ:I5K6:TSUW:ISZD:Z2IF:JI5D:OMPC:DPCS:TFBF Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-13 00:00:29.394839282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 17:00:29.485817   28193 out.go:177] * Using the docker driver based on user configuration
	I0812 17:00:29.485866   28193 start.go:278] selected driver: docker
	I0812 17:00:29.485887   28193 start.go:751] validating driver "docker" against <nil>
	I0812 17:00:29.485905   28193 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0812 17:00:29.489792   28193 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 17:00:29.666624   28193 info.go:263] docker info: {ID:NUVB:KIYS:WZ5S:BBBQ:I5K6:TSUW:ISZD:Z2IF:JI5D:OMPC:DPCS:TFBF Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-13 00:00:29.602869081 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 17:00:29.666722   28193 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0812 17:00:29.666849   28193 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 17:00:29.666865   28193 cni.go:93] Creating CNI manager for ""
	I0812 17:00:29.666873   28193 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0812 17:00:29.666882   28193 start_flags.go:277] config:
	{Name:addons-20210812170028-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210812170028-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Ne
tworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 17:00:29.693976   28193 out.go:177] * Starting control plane node addons-20210812170028-27878 in cluster addons-20210812170028-27878
	I0812 17:00:29.694039   28193 cache.go:117] Beginning downloading kic base image for docker with docker
	I0812 17:00:29.720514   28193 out.go:177] * Pulling base image ...
	I0812 17:00:29.720595   28193 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0812 17:00:29.720682   28193 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0812 17:00:29.720690   28193 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4
	I0812 17:00:29.720713   28193 cache.go:56] Caching tarball of preloaded images
	I0812 17:00:29.720932   28193 preload.go:173] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0812 17:00:29.720956   28193 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0812 17:00:29.723686   28193 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/config.json ...
	I0812 17:00:29.723757   28193 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/config.json: {Name:mkdc05142cb0bb6fae0563d7e63da9ee00b46db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 17:00:29.832856   28193 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0812 17:00:29.832883   28193 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0812 17:00:29.832894   28193 cache.go:205] Successfully downloaded all kic artifacts
	I0812 17:00:29.832944   28193 start.go:313] acquiring machines lock for addons-20210812170028-27878: {Name:mkb16942b0f2ad4c87167bf358747e46152cdd44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 17:00:29.833092   28193 start.go:317] acquired machines lock for "addons-20210812170028-27878" in 137.223µs
	I0812 17:00:29.833124   28193 start.go:89] Provisioning new machine with config: &{Name:addons-20210812170028-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210812170028-27878 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0812 17:00:29.833189   28193 start.go:126] createHost starting for "" (driver="docker")
	I0812 17:00:29.881674   28193 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0812 17:00:29.882028   28193 start.go:160] libmachine.API.Create for "addons-20210812170028-27878" (driver="docker")
	I0812 17:00:29.882072   28193 client.go:168] LocalClient.Create starting
	I0812 17:00:29.882342   28193 main.go:130] libmachine: Creating CA: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0812 17:00:30.024234   28193 main.go:130] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0812 17:00:30.215705   28193 cli_runner.go:115] Run: docker network inspect addons-20210812170028-27878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0812 17:00:30.324442   28193 cli_runner.go:162] docker network inspect addons-20210812170028-27878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0812 17:00:30.324555   28193 network_create.go:255] running [docker network inspect addons-20210812170028-27878] to gather additional debugging logs...
	I0812 17:00:30.324581   28193 cli_runner.go:115] Run: docker network inspect addons-20210812170028-27878
	W0812 17:00:30.435202   28193 cli_runner.go:162] docker network inspect addons-20210812170028-27878 returned with exit code 1
	I0812 17:00:30.435227   28193 network_create.go:258] error running [docker network inspect addons-20210812170028-27878]: docker network inspect addons-20210812170028-27878: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210812170028-27878
	I0812 17:00:30.435247   28193 network_create.go:260] output of [docker network inspect addons-20210812170028-27878]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210812170028-27878
	
	** /stderr **
	I0812 17:00:30.435337   28193 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0812 17:00:30.546658   28193 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000e430] misses:0}
	I0812 17:00:30.546699   28193 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0812 17:00:30.546719   28193 network_create.go:106] attempt to create docker network addons-20210812170028-27878 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0812 17:00:30.546813   28193 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210812170028-27878
	I0812 17:00:34.495037   28193 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210812170028-27878: (3.948051562s)
	I0812 17:00:34.495066   28193 network_create.go:90] docker network addons-20210812170028-27878 192.168.49.0/24 created
	I0812 17:00:34.495081   28193 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210812170028-27878" container
	I0812 17:00:34.495206   28193 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0812 17:00:34.603876   28193 cli_runner.go:115] Run: docker volume create addons-20210812170028-27878 --label name.minikube.sigs.k8s.io=addons-20210812170028-27878 --label created_by.minikube.sigs.k8s.io=true
	I0812 17:00:34.715597   28193 oci.go:102] Successfully created a docker volume addons-20210812170028-27878
	I0812 17:00:34.715735   28193 cli_runner.go:115] Run: docker run --rm --name addons-20210812170028-27878-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210812170028-27878 --entrypoint /usr/bin/test -v addons-20210812170028-27878:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0812 17:00:35.427269   28193 oci.go:106] Successfully prepared a docker volume addons-20210812170028-27878
	I0812 17:00:35.427351   28193 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0812 17:00:35.427369   28193 kic.go:179] Starting extracting preloaded images to volume ...
	I0812 17:00:35.427414   28193 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0812 17:00:35.427481   28193 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210812170028-27878:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0812 17:00:35.618789   28193 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210812170028-27878 --name addons-20210812170028-27878 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210812170028-27878 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210812170028-27878 --network addons-20210812170028-27878 --ip 192.168.49.2 --volume addons-20210812170028-27878:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0812 17:00:41.224641   28193 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210812170028-27878:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (5.796932518s)
	I0812 17:00:41.224671   28193 kic.go:188] duration metric: took 5.797136 seconds to extract preloaded images to volume
	I0812 17:00:44.102157   28193 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210812170028-27878 --name addons-20210812170028-27878 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210812170028-27878 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210812170028-27878 --network addons-20210812170028-27878 --ip 192.168.49.2 --volume addons-20210812170028-27878:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79: (8.483054878s)
	I0812 17:00:44.102267   28193 cli_runner.go:115] Run: docker container inspect addons-20210812170028-27878 --format={{.State.Running}}
	I0812 17:00:44.218337   28193 cli_runner.go:115] Run: docker container inspect addons-20210812170028-27878 --format={{.State.Status}}
	I0812 17:00:44.335336   28193 cli_runner.go:115] Run: docker exec addons-20210812170028-27878 stat /var/lib/dpkg/alternatives/iptables
	I0812 17:00:44.501374   28193 oci.go:278] the created container "addons-20210812170028-27878" has a running status.
	I0812 17:00:44.501418   28193 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812170028-27878/id_rsa...
	I0812 17:00:44.567719   28193 kic_runner.go:188] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812170028-27878/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0812 17:00:44.730378   28193 cli_runner.go:115] Run: docker container inspect addons-20210812170028-27878 --format={{.State.Status}}
	I0812 17:00:44.842266   28193 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0812 17:00:44.842284   28193 kic_runner.go:115] Args: [docker exec --privileged addons-20210812170028-27878 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0812 17:00:45.008279   28193 cli_runner.go:115] Run: docker container inspect addons-20210812170028-27878 --format={{.State.Status}}
	I0812 17:00:45.122333   28193 machine.go:88] provisioning docker machine ...
	I0812 17:00:45.122374   28193 ubuntu.go:169] provisioning hostname "addons-20210812170028-27878"
	I0812 17:00:45.122488   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:00:45.233759   28193 main.go:130] libmachine: Using SSH client type: native
	I0812 17:00:45.233985   28193 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 60386 <nil> <nil>}
	I0812 17:00:45.233997   28193 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210812170028-27878 && echo "addons-20210812170028-27878" | sudo tee /etc/hostname
	I0812 17:00:45.235361   28193 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0812 17:00:48.380844   28193 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210812170028-27878
	
	I0812 17:00:48.380941   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:00:48.493424   28193 main.go:130] libmachine: Using SSH client type: native
	I0812 17:00:48.493583   28193 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 60386 <nil> <nil>}
	I0812 17:00:48.493598   28193 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210812170028-27878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210812170028-27878/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210812170028-27878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 17:00:48.609787   28193 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0812 17:00:48.609805   28193 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0812 17:00:48.609828   28193 ubuntu.go:177] setting up certificates
	I0812 17:00:48.609834   28193 provision.go:83] configureAuth start
	I0812 17:00:48.609910   28193 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210812170028-27878
	I0812 17:00:48.732030   28193 provision.go:137] copyHostCerts
	I0812 17:00:48.732129   28193 exec_runner.go:152] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0812 17:00:48.732358   28193 exec_runner.go:152] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0812 17:00:48.732527   28193 exec_runner.go:152] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1675 bytes)
	I0812 17:00:48.732647   28193 provision.go:111] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.addons-20210812170028-27878 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210812170028-27878]
	I0812 17:00:48.838615   28193 provision.go:171] copyRemoteCerts
	I0812 17:00:48.838677   28193 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 17:00:48.838766   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:00:48.953681   28193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60386 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812170028-27878/id_rsa Username:docker}
	I0812 17:00:49.038110   28193 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 17:00:49.056520   28193 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 17:00:49.073261   28193 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0812 17:00:49.089554   28193 provision.go:86] duration metric: configureAuth took 479.693706ms
	I0812 17:00:49.089567   28193 ubuntu.go:193] setting minikube options for container-runtime
	I0812 17:00:49.089809   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:00:49.200119   28193 main.go:130] libmachine: Using SSH client type: native
	I0812 17:00:49.200278   28193 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 60386 <nil> <nil>}
	I0812 17:00:49.200286   28193 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0812 17:00:49.317823   28193 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0812 17:00:49.317836   28193 ubuntu.go:71] root file system type: overlay
	I0812 17:00:49.317956   28193 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0812 17:00:49.318047   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:00:49.430585   28193 main.go:130] libmachine: Using SSH client type: native
	I0812 17:00:49.430765   28193 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 60386 <nil> <nil>}
	I0812 17:00:49.430818   28193 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0812 17:00:49.554504   28193 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0812 17:00:49.554615   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:00:49.667944   28193 main.go:130] libmachine: Using SSH client type: native
	I0812 17:00:49.668098   28193 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 60386 <nil> <nil>}
	I0812 17:00:49.668112   28193 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0812 17:01:11.959441   28193 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:54:50.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-13 00:00:49.551672056 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0812 17:01:11.959467   28193 machine.go:91] provisioned docker machine in 26.836365754s
	I0812 17:01:11.959475   28193 client.go:171] LocalClient.Create took 42.076222988s
	I0812 17:01:11.959493   28193 start.go:168] duration metric: libmachine.API.Create for "addons-20210812170028-27878" took 42.076296103s
	I0812 17:01:11.959504   28193 start.go:267] post-start starting for "addons-20210812170028-27878" (driver="docker")
	I0812 17:01:11.959508   28193 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 17:01:11.959581   28193 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 17:01:11.959668   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:01:12.072633   28193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60386 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812170028-27878/id_rsa Username:docker}
	I0812 17:01:12.158167   28193 ssh_runner.go:149] Run: cat /etc/os-release
	I0812 17:01:12.161691   28193 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0812 17:01:12.161706   28193 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0812 17:01:12.161716   28193 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0812 17:01:12.161723   28193 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0812 17:01:12.161733   28193 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0812 17:01:12.161858   28193 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0812 17:01:12.161901   28193 start.go:270] post-start completed in 202.386731ms
	I0812 17:01:12.162439   28193 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210812170028-27878
	I0812 17:01:12.273917   28193 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/config.json ...
	I0812 17:01:12.274328   28193 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 17:01:12.274389   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:01:12.383171   28193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60386 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812170028-27878/id_rsa Username:docker}
	I0812 17:01:12.475623   28193 start.go:129] duration metric: createHost completed in 42.641236669s
	I0812 17:01:12.475640   28193 start.go:80] releasing machines lock for "addons-20210812170028-27878", held for 42.641352198s
	I0812 17:01:12.475750   28193 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210812170028-27878
	I0812 17:01:12.586692   28193 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0812 17:01:12.586698   28193 ssh_runner.go:149] Run: systemctl --version
	I0812 17:01:12.586782   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:01:12.586779   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:01:12.708003   28193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60386 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812170028-27878/id_rsa Username:docker}
	I0812 17:01:12.708190   28193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60386 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812170028-27878/id_rsa Username:docker}
	I0812 17:01:12.956723   28193 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0812 17:01:12.965911   28193 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0812 17:01:12.974937   28193 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0812 17:01:12.974997   28193 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0812 17:01:12.983830   28193 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 17:01:12.995559   28193 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0812 17:01:13.053541   28193 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0812 17:01:13.110194   28193 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0812 17:01:13.119624   28193 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0812 17:01:13.174646   28193 ssh_runner.go:149] Run: sudo systemctl start docker
	I0812 17:01:13.184011   28193 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0812 17:01:13.333176   28193 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0812 17:01:13.404613   28193 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0812 17:01:13.404778   28193 cli_runner.go:115] Run: docker exec -t addons-20210812170028-27878 dig +short host.docker.internal
	I0812 17:01:13.584083   28193 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
	I0812 17:01:13.584178   28193 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0812 17:01:13.588620   28193 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 17:01:13.598528   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:01:13.710716   28193 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0812 17:01:13.710803   28193 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 17:01:13.744257   28193 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0812 17:01:13.744271   28193 docker.go:466] Images already preloaded, skipping extraction
	I0812 17:01:13.744379   28193 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 17:01:13.777638   28193 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0812 17:01:13.777655   28193 cache_images.go:74] Images are preloaded, skipping loading
	I0812 17:01:13.777751   28193 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0812 17:01:13.970156   28193 cni.go:93] Creating CNI manager for ""
	I0812 17:01:13.970171   28193 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0812 17:01:13.970183   28193 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0812 17:01:13.970200   28193 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210812170028-27878 NodeName:addons-20210812170028-27878 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/
minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0812 17:01:13.970310   28193 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "addons-20210812170028-27878"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 17:01:13.970392   28193 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=addons-20210812170028-27878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210812170028-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0812 17:01:13.970459   28193 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0812 17:01:13.978605   28193 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 17:01:13.978661   28193 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 17:01:13.986126   28193 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0812 17:01:14.025345   28193 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 17:01:14.037077   28193 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I0812 17:01:14.048623   28193 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0812 17:01:14.052695   28193 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 17:01:14.061736   28193 certs.go:52] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878 for IP: 192.168.49.2
	I0812 17:01:14.061781   28193 certs.go:183] generating minikubeCA CA: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0812 17:01:14.155933   28193 crypto.go:157] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt ...
	I0812 17:01:14.155943   28193 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt: {Name:mkc3b4ea3a6e975b9790f97e2e206b53c7dcfe05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 17:01:14.156857   28193 crypto.go:165] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key ...
	I0812 17:01:14.156867   28193 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key: {Name:mke12b982c00e47abaf0d24e46eb3f3970fbc789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 17:01:14.157060   28193 certs.go:183] generating proxyClientCA CA: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0812 17:01:14.290274   28193 crypto.go:157] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt ...
	I0812 17:01:14.290289   28193 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt: {Name:mkd5079cec74dae4ee0de5ac7bdb282c02c43d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 17:01:14.290525   28193 crypto.go:165] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key ...
	I0812 17:01:14.290546   28193 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key: {Name:mk3946e801691b1deb19051ab045ce93e7472558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 17:01:14.290775   28193 certs.go:294] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/client.key
	I0812 17:01:14.290784   28193 crypto.go:69] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/client.crt with IP's: []
	I0812 17:01:14.345517   28193 crypto.go:157] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/client.crt ...
	I0812 17:01:14.345525   28193 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/client.crt: {Name:mk699fcfd921308e9cdccc7bc7dfb19397cfa8ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 17:01:14.345721   28193 crypto.go:165] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/client.key ...
	I0812 17:01:14.345729   28193 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/client.key: {Name:mke99b5edc19d9eaade64dcce09b0911ee6d9d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 17:01:14.345897   28193 certs.go:294] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/apiserver.key.dd3b5fb2
	I0812 17:01:14.345903   28193 crypto.go:69] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0812 17:01:14.506689   28193 crypto.go:157] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/apiserver.crt.dd3b5fb2 ...
	I0812 17:01:14.506704   28193 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/apiserver.crt.dd3b5fb2: {Name:mk729e9bc08a3c5eeb4ce5c8e5284932709fd5c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 17:01:14.507000   28193 crypto.go:165] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/apiserver.key.dd3b5fb2 ...
	I0812 17:01:14.507009   28193 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/apiserver.key.dd3b5fb2: {Name:mk1270bb6a6f02b8718469148bbbf60f13796f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 17:01:14.507220   28193 certs.go:305] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/apiserver.crt
	I0812 17:01:14.507361   28193 certs.go:309] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/apiserver.key
	I0812 17:01:14.507546   28193 certs.go:294] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/proxy-client.key
	I0812 17:01:14.507552   28193 crypto.go:69] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/proxy-client.crt with IP's: []
	I0812 17:01:14.586320   28193 crypto.go:157] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/proxy-client.crt ...
	I0812 17:01:14.586331   28193 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/proxy-client.crt: {Name:mkc366c47c55e9ce1fb97bcf5488d91710ef47d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 17:01:14.586560   28193 crypto.go:165] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/proxy-client.key ...
	I0812 17:01:14.586569   28193 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/proxy-client.key: {Name:mk918217a636a1dde3a6bcb9054a2f5f3cc11bfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 17:01:14.586943   28193 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 17:01:14.586998   28193 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0812 17:01:14.587035   28193 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0812 17:01:14.587068   28193 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1675 bytes)
	I0812 17:01:14.587860   28193 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0812 17:01:14.605372   28193 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 17:01:14.620531   28193 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 17:01:14.636113   28193 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812170028-27878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 17:01:14.651580   28193 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 17:01:14.667471   28193 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0812 17:01:14.682960   28193 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 17:01:14.698395   28193 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0812 17:01:14.713834   28193 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 17:01:14.730493   28193 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 17:01:14.742300   28193 ssh_runner.go:149] Run: openssl version
	I0812 17:01:14.749868   28193 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 17:01:14.758006   28193 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 17:01:14.762025   28193 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 13 00:01 /usr/share/ca-certificates/minikubeCA.pem
	I0812 17:01:14.762079   28193 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 17:01:14.767561   28193 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 17:01:14.774809   28193 kubeadm.go:390] StartCluster: {Name:addons-20210812170028-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210812170028-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 17:01:14.774932   28193 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0812 17:01:14.806897   28193 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 17:01:14.814301   28193 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 17:01:14.821170   28193 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0812 17:01:14.821219   28193 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 17:01:14.827979   28193 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 17:01:14.828000   28193 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0812 17:01:15.532553   28193 out.go:204]   - Generating certificates and keys ...
	I0812 17:01:17.511544   28193 out.go:204]   - Booting up control plane ...
	I0812 17:01:32.053759   28193 out.go:204]   - Configuring RBAC rules ...
	I0812 17:01:32.438066   28193 cni.go:93] Creating CNI manager for ""
	I0812 17:01:32.438080   28193 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0812 17:01:32.438106   28193 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 17:01:32.438186   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=addons-20210812170028-27878 minikube.k8s.io/updated_at=2021_08_12T17_01_32_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:32.438194   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:32.469105   28193 ops.go:34] apiserver oom_adj: -16
	I0812 17:01:32.558927   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:33.167727   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:33.666521   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:34.166496   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:34.666488   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:35.166433   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:35.669506   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:36.165182   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:36.668019   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:37.164267   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:37.662785   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:38.163294   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:38.661451   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:39.162327   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:39.660180   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:40.160361   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:40.661856   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:41.160177   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:41.660416   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:42.161672   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:42.660194   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:43.160497   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:43.661652   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:44.161331   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:44.660199   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:45.160760   28193 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 17:01:45.233331   28193 kubeadm.go:985] duration metric: took 12.79486738s to wait for elevateKubeSystemPrivileges.
	I0812 17:01:45.233347   28193 kubeadm.go:392] StartCluster complete in 30.457692695s
	I0812 17:01:45.233361   28193 settings.go:142] acquiring lock: {Name:mk3e1d203e6439798c8d384e90b2bc232b4914ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 17:01:45.233535   28193 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0812 17:01:45.233799   28193 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mka81e290e52453cdddcec52ed4fa17d888b133f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 17:01:45.754727   28193 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210812170028-27878" rescaled to 1
	I0812 17:01:45.754766   28193 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 17:01:45.754774   28193 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0812 17:01:45.754798   28193 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver helm-tiller gcp-auth]
	I0812 17:01:45.781423   28193 out.go:177] * Verifying Kubernetes components...
	I0812 17:01:45.781476   28193 addons.go:59] Setting volumesnapshots=true in profile "addons-20210812170028-27878"
	I0812 17:01:45.781483   28193 addons.go:59] Setting olm=true in profile "addons-20210812170028-27878"
	I0812 17:01:45.781491   28193 addons.go:59] Setting gcp-auth=true in profile "addons-20210812170028-27878"
	I0812 17:01:45.781497   28193 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0812 17:01:45.781502   28193 addons.go:135] Setting addon olm=true in "addons-20210812170028-27878"
	I0812 17:01:45.781504   28193 addons.go:135] Setting addon volumesnapshots=true in "addons-20210812170028-27878"
	I0812 17:01:45.781512   28193 mustload.go:65] Loading cluster: addons-20210812170028-27878
	I0812 17:01:45.781513   28193 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210812170028-27878"
	I0812 17:01:45.781533   28193 host.go:66] Checking if "addons-20210812170028-27878" exists ...
	I0812 17:01:45.781492   28193 addons.go:59] Setting default-storageclass=true in profile "addons-20210812170028-27878"
	I0812 17:01:45.781542   28193 host.go:66] Checking if "addons-20210812170028-27878" exists ...
	I0812 17:01:45.781543   28193 addons.go:59] Setting storage-provisioner=true in profile "addons-20210812170028-27878"
	I0812 17:01:45.781562   28193 addons.go:59] Setting metrics-server=true in profile "addons-20210812170028-27878"
	I0812 17:01:45.781588   28193 addons.go:135] Setting addon metrics-server=true in "addons-20210812170028-27878"
	I0812 17:01:45.781591   28193 addons.go:135] Setting addon storage-provisioner=true in "addons-20210812170028-27878"
	I0812 17:01:45.781560   28193 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210812170028-27878"
	W0812 17:01:45.781599   28193 addons.go:147] addon storage-provisioner should already be in state true
	I0812 17:01:45.781591   28193 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210812170028-27878"
	I0812 17:01:45.781476   28193 addons.go:59] Setting helm-tiller=true in profile "addons-20210812170028-27878"
	I0812 17:01:45.781653   28193 host.go:66] Checking if "addons-20210812170028-27878" exists ...
	I0812 17:01:45.781656   28193 host.go:66] Checking if "addons-20210812170028-27878" exists ...
	I0812 17:01:45.781669   28193 addons.go:135] Setting addon helm-tiller=true in "addons-20210812170028-27878"
	I0812 17:01:45.781615   28193 addons.go:59] Setting registry=true in profile "addons-20210812170028-27878"
	I0812 17:01:45.781734   28193 host.go:66] Checking if "addons-20210812170028-27878" exists ...
	I0812 17:01:45.781716   28193 host.go:66] Checking if "addons-20210812170028-27878" exists ...
	I0812 17:01:45.781743   28193 addons.go:135] Setting addon registry=true in "addons-20210812170028-27878"
	I0812 17:01:45.781829   28193 host.go:66] Checking if "addons-20210812170028-27878" exists ...
	I0812 17:01:45.782078   28193 cli_runner.go:115] Run: docker container inspect addons-20210812170028-27878 --format={{.State.Status}}
	I0812 17:01:45.782100   28193 cli_runner.go:115] Run: docker container inspect addons-20210812170028-27878 --format={{.State.Status}}
	I0812 17:01:45.782221   28193 cli_runner.go:115] Run: docker container inspect addons-20210812170028-27878 --format={{.State.Status}}
	I0812 17:01:45.784098   28193 cli_runner.go:115] Run: docker container inspect addons-20210812170028-27878 --format={{.State.Status}}
	I0812 17:01:45.786787   28193 cli_runner.go:115] Run: docker container inspect addons-20210812170028-27878 --format={{.State.Status}}
	I0812 17:01:45.786877   28193 cli_runner.go:115] Run: docker container inspect addons-20210812170028-27878 --format={{.State.Status}}
	I0812 17:01:45.819739   28193 cli_runner.go:115] Run: docker container inspect addons-20210812170028-27878 --format={{.State.Status}}
	I0812 17:01:45.819769   28193 cli_runner.go:115] Run: docker container inspect addons-20210812170028-27878 --format={{.State.Status}}
	I0812 17:01:45.819842   28193 cli_runner.go:115] Run: docker container inspect addons-20210812170028-27878 --format={{.State.Status}}
	I0812 17:01:45.880147   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:01:45.880323   28193 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 17:01:46.092785   28193 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0812 17:01:46.118754   28193 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 17:01:46.119004   28193 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 17:01:46.218783   28193 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0812 17:01:46.121325   28193 addons.go:135] Setting addon default-storageclass=true in "addons-20210812170028-27878"
	I0812 17:01:46.218819   28193 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W0812 17:01:46.218830   28193 addons.go:147] addon default-storageclass should already be in state true
	I0812 17:01:46.123039   28193 host.go:66] Checking if "addons-20210812170028-27878" exists ...
	I0812 17:01:46.133244   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:01:46.281613   28193 out.go:177]   - Using image gcr.io/kubernetes-helm/tiller:v2.16.12
	I0812 17:01:46.176816   28193 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0812 17:01:46.197586   28193 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0812 17:01:46.218916   28193 host.go:66] Checking if "addons-20210812170028-27878" exists ...
	I0812 17:01:46.281794   28193 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 17:01:46.281807   28193 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0812 17:01:46.281816   28193 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0812 17:01:46.281819   28193 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0812 17:01:46.281825   28193 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0812 17:01:46.281828   28193 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2433 bytes)
	I0812 17:01:46.218975   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:01:46.219302   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:01:46.236664   28193 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0812 17:01:46.244775   28193 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0812 17:01:46.284252   28193 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0812 17:01:46.284317   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:01:46.284171   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:01:46.284203   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:01:46.288096   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:01:46.342881   28193 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0812 17:01:46.305022   28193 cli_runner.go:115] Run: docker container inspect addons-20210812170028-27878 --format={{.State.Status}}
	I0812 17:01:46.376720   28193 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0812 17:01:46.381415   28193 node_ready.go:35] waiting up to 6m0s for node "addons-20210812170028-27878" to be "Ready" ...
	I0812 17:01:46.402768   28193 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0812 17:01:46.413135   28193 node_ready.go:49] node "addons-20210812170028-27878" has status "Ready":"True"
	I0812 17:01:46.446879   28193 node_ready.go:38] duration metric: took 44.093531ms waiting for node "addons-20210812170028-27878" to be "Ready" ...
	I0812 17:01:46.446888   28193 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0812 17:01:46.446903   28193 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 17:01:46.482656   28193 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0812 17:01:46.480847   28193 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-flqmm" in "kube-system" namespace to be "Ready" ...
	I0812 17:01:46.508859   28193 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0812 17:01:46.535736   28193 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0812 17:01:46.577633   28193 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0812 17:01:46.567741   28193 start.go:736] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0812 17:01:46.577712   28193 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0812 17:01:46.577724   28193 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0812 17:01:46.577860   28193 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812170028-27878
	I0812 17:01:46.602310   28193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60386 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812170028-27878/id_rsa Username:docker}
	I0812 17:01:46.607805   28193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60386 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812170028-27878/id_rsa Username:docker}
	I0812 17:01:46.614947   28193 main.go:116] stdlog: detect_unix.go:31 open /proc/sys/kernel/osrelease: no such file or directory

                                                
                                                
** /stderr **
addons_test.go:77: out/minikube-darwin-amd64 start -p addons-20210812170028-27878 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker  --addons=helm-tiller --addons=gcp-auth failed: exit status 1
helpers_test.go:176: Cleaning up "addons-20210812170028-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p addons-20210812170028-27878
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p addons-20210812170028-27878: (13.119231567s)
--- FAIL: TestAddons (90.78s)

                                                
                                    
x
+
TestCertOptions (64.65s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20210812180026-27878 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20210812180026-27878 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (46.9124895s)
cert_options_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20210812180026-27878 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210812180026-27878 config view
cert_options_test.go:78: apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters:\n\t- cluster:\n\t    certificate-authority: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Thu, 12 Aug 2021 18:01:12 PDT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.22.0\n\t      name: cluster_info\n\t    server: https://localhost:49646\n\t  name: cert-options-20210812180026-27878\n\tcontexts:\n\t- context:\n\t    cluster: cert-options-20210812180026-27878\n\t    extensions:\n\t    - extension:\n\t        last-update: Thu, 12 Aug 2021 18:01:12 PDT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.22.0\n\t      name: context_info\n\t    namespace: default\n\t    user: cert-options-20210812180026-27878\n\t  name: cert-options-20210812180026-27878\n\tcurrent-context: cert-options-20210812180026-27878\n\tkind: Config\n\tpreferences: {}\n\tusers:\n\t- name: cert-options-20210812180026-27878\n\t  user:\n\t    client-certificate: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/client.crt\n\t    client-key: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/client.key\n\n-- /stdout --"
cert_options_test.go:81: *** TestCertOptions FAILED at 2021-08-12 18:01:14.37192 -0700 PDT m=+3700.575091251
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect cert-options-20210812180026-27878
helpers_test.go:236: (dbg) docker inspect cert-options-20210812180026-27878:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e69308f7031e89332599a1e2dd197cbd68a54fae407c61666a4f636ce9f8171",
	        "Created": "2021-08-13T01:00:30.98599478Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 341291,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T01:00:41.748281764Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/0e69308f7031e89332599a1e2dd197cbd68a54fae407c61666a4f636ce9f8171/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e69308f7031e89332599a1e2dd197cbd68a54fae407c61666a4f636ce9f8171/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e69308f7031e89332599a1e2dd197cbd68a54fae407c61666a4f636ce9f8171/hosts",
	        "LogPath": "/var/lib/docker/containers/0e69308f7031e89332599a1e2dd197cbd68a54fae407c61666a4f636ce9f8171/0e69308f7031e89332599a1e2dd197cbd68a54fae407c61666a4f636ce9f8171-json.log",
	        "Name": "/cert-options-20210812180026-27878",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "cert-options-20210812180026-27878:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "cert-options-20210812180026-27878",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8555/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bf6b75e69e7f0f1fd77f373f05c5841f25d154266e0bf09748322009ec556845-init/diff:/var/lib/docker/overlay2/f715174260cb84cd45d2861bb5b8ef3bf7f57a79e1ad9faf18f7daceacacdb26/diff:/var/lib/docker/overlay2/a04c6447713cacd6930f9744ac163526e823509f0e887a4ee3657e26d18bb3c2/diff:/var/lib/docker/overlay2/f182bed44ffe14b1144a7c2f7e32e7ce023ac9e2bd863f2c8f0c91ea356c8259/diff:/var/lib/docker/overlay2/5d757d10cdc497158de4bbe8dabf9eedf14626e01df4dd6d35d490ecb30bf9c8/diff:/var/lib/docker/overlay2/422eef072395ee54e0f7179c7e52268b84bf915536d45311ae248126459657af/diff:/var/lib/docker/overlay2/e396f199b6cfca371b722f01a1e2924dfb281ea9dbf61d54b41d2fc22e6aa5c5/diff:/var/lib/docker/overlay2/21d9216959f2e55fe3dbcd4d4f8a3167e37a6c92c0d145e26cd16fd2efe2a1b2/diff:/var/lib/docker/overlay2/614f1da60876e55539ea7711a06227980406a7f5dc8c0c3b793eeda2707573a7/diff:/var/lib/docker/overlay2/0d05a121885c7c744ae4e08c64c98f8df852de51e0ff307e883b2c3fa073efe2/diff:/var/lib/docker/overlay2/3ce100
f425aa005ce54b8df8714ab8266243bb723d7a013361924636464b5c87/diff:/var/lib/docker/overlay2/cf6708a46c9ce9be9f514145a802ec8f2c769c5ea11c1f24e0c3fd20d97dd239/diff:/var/lib/docker/overlay2/8777edd041e50344e34361afdeb431abae3cba4ae7c021d7b22a2422a75fbf42/diff:/var/lib/docker/overlay2/212d3f4628f826d9ccda9072489b3e5fda2680eb2cfe50f42189c50154422be5/diff:/var/lib/docker/overlay2/c21265ca31d93d4bee835caeee6814518af67458c0446a1e12b9b1fb9f3fa8bd/diff:/var/lib/docker/overlay2/f0a961af43f72d95eb930eb0529f9e060b0e909abd40509747de016bdf83791d/diff:/var/lib/docker/overlay2/f1c8cdc84add3afd13a9cfe9d8b243943ff31e904557123ef2fc6b1eeed7799b/diff:/var/lib/docker/overlay2/f0c8c2a078356e23d25e8213bba7d4933434f906e58056d9b186842877f3bcfb/diff:/var/lib/docker/overlay2/b1a5f08de123962e7d6bb8e1ea4ea587df3f3e2c1e85f47d284477a51bf585df/diff:/var/lib/docker/overlay2/3a731dacd005bcbb7d156b44814e8a1b532129e6c39d1043c666dede672e32aa/diff:/var/lib/docker/overlay2/a16aa6163aedd0d4b7b2f4528a727ae92139eacdd31ec5b7e3db5192da8c206e/diff:/var/lib/d
ocker/overlay2/7028c72abbb7165efa88355656c2c161d4c8223e49a64d842d8313501dab8df5/diff:/var/lib/docker/overlay2/a0e99187348d94541befd8a8d0539a3a4a53cced33d78e9e109cd849383d21df/diff:/var/lib/docker/overlay2/9f8525edd155caa1bbc85060598c58d57f46446b27c0b9551a86818fbbeca52b/diff:/var/lib/docker/overlay2/a08301d3003a5683a7777b606058b4db1d148ef83d0ef4343ab7c9ca3059a45e/diff:/var/lib/docker/overlay2/18b883f6cb1aa317952b3a6c012b96254657bab2cd7d68e6ed797e606380216a/diff:/var/lib/docker/overlay2/66df4172dcaf74386e1cef7076213ff46f338dc50921929c03d105ffa2c1a68c/diff:/var/lib/docker/overlay2/53be28e913d78c4777c095cefd722c540583d4f8ce03c6ff7ca3b3c89ab37b9e/diff:/var/lib/docker/overlay2/2eb3ffcdff14b928484aa40e21392c7622808397ddde81596014c9ea1f14722d/diff:/var/lib/docker/overlay2/e31a2b59e27071979e8606deb8bba8cd284962210c0505e59e785e27197278ae/diff:/var/lib/docker/overlay2/3bc51237da47ba3959771beeb969bccc2058ae8d8e91dc367eed0354120af541/diff:/var/lib/docker/overlay2/7d005a1ac0be7776984b9bbfd904a6e1d20810ac8c7e13ed5a82610e174
cf823/diff:/var/lib/docker/overlay2/dea645fc9191a67162267978e1f134969486af076dbc70da7e3761f554a3317c/diff:/var/lib/docker/overlay2/6a3ed620466ebb13eb262fbedbb5bc90976c82d824e3c3ee8d978c8b1cfb12cc/diff:/var/lib/docker/overlay2/7a473817be0962d3e2ae1f57f32e95115af914c56a786f2d4d15a9dca232cefa/diff:/var/lib/docker/overlay2/3ca997de4525080aca8f86ad0f68f4f26acc4262a80846cfc96b3d4af8dd2526/diff:/var/lib/docker/overlay2/ad3ce384b651be2a1810da477a29e598be710b6e40f940a3bb3a4a9ed7ee048d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf6b75e69e7f0f1fd77f373f05c5841f25d154266e0bf09748322009ec556845/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf6b75e69e7f0f1fd77f373f05c5841f25d154266e0bf09748322009ec556845/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf6b75e69e7f0f1fd77f373f05c5841f25d154266e0bf09748322009ec556845/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "cert-options-20210812180026-27878",
	                "Source": "/var/lib/docker/volumes/cert-options-20210812180026-27878/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "cert-options-20210812180026-27878",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8555/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "cert-options-20210812180026-27878",
	                "name.minikube.sigs.k8s.io": "cert-options-20210812180026-27878",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f1019520b67ac390f90ab8540c97c7bd05420df88658d2e307cd1c0a1eb4e0da",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49642"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49643"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49644"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49645"
	                    }
	                ],
	                "8555/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49646"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f1019520b67a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "cert-options-20210812180026-27878": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0e69308f7031",
	                        "cert-options-20210812180026-27878"
	                    ],
	                    "NetworkID": "49dce90f0404f264800d909b21ffdf0b7d69086d4c9cdae8a9789be53c1f3b1f",
	                    "EndpointID": "a8970b8adb864af58a9eda65825f3ed0e1d3de2d9ff9e4eb84b4bef399d001ed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-20210812180026-27878 -n cert-options-20210812180026-27878
helpers_test.go:245: <<< TestCertOptions FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestCertOptions]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20210812180026-27878 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p cert-options-20210812180026-27878 logs -n 25: (2.087881851s)
helpers_test.go:253: TestCertOptions logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                      | kubernetes-upgrade-20210812175201-27878 | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:52:01 PDT | Thu, 12 Aug 2021 17:53:04 PDT |
	|         | kubernetes-upgrade-20210812175201-27878 |                                         |         |         |                               |                               |
	|         | --memory=2200                           |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0            |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                               |                               |
	| stop    | -p                                      | kubernetes-upgrade-20210812175201-27878 | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:53:05 PDT | Thu, 12 Aug 2021 17:53:17 PDT |
	|         | kubernetes-upgrade-20210812175201-27878 |                                         |         |         |                               |                               |
	| start   | -p                                      | kubernetes-upgrade-20210812175201-27878 | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:53:17 PDT | Thu, 12 Aug 2021 17:54:17 PDT |
	|         | kubernetes-upgrade-20210812175201-27878 |                                         |         |         |                               |                               |
	|         | --memory=2200                           |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0       |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                               |                               |
	| start   | -p                                      | missing-upgrade-20210812175145-27878    | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:52:58 PDT | Thu, 12 Aug 2021 17:54:38 PDT |
	|         | missing-upgrade-20210812175145-27878    |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr -v=1    |                                         |         |         |                               |                               |
	|         | --driver=docker                         |                                         |         |         |                               |                               |
	| start   | -p                                      | kubernetes-upgrade-20210812175201-27878 | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:54:18 PDT | Thu, 12 Aug 2021 17:54:39 PDT |
	|         | kubernetes-upgrade-20210812175201-27878 |                                         |         |         |                               |                               |
	|         | --memory=2200                           |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0       |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                               |                               |
	| delete  | -p                                      | missing-upgrade-20210812175145-27878    | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:54:38 PDT | Thu, 12 Aug 2021 17:54:53 PDT |
	|         | missing-upgrade-20210812175145-27878    |                                         |         |         |                               |                               |
	| delete  | -p                                      | kubernetes-upgrade-20210812175201-27878 | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:54:40 PDT | Thu, 12 Aug 2021 17:54:57 PDT |
	|         | kubernetes-upgrade-20210812175201-27878 |                                         |         |         |                               |                               |
	| start   | -p                                      | stopped-upgrade-20210812175453-27878    | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:56:28 PDT | Thu, 12 Aug 2021 17:57:17 PDT |
	|         | stopped-upgrade-20210812175453-27878    |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr -v=1    |                                         |         |         |                               |                               |
	|         | --driver=docker                         |                                         |         |         |                               |                               |
	| logs    | -p                                      | stopped-upgrade-20210812175453-27878    | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:57:17 PDT | Thu, 12 Aug 2021 17:57:20 PDT |
	|         | stopped-upgrade-20210812175453-27878    |                                         |         |         |                               |                               |
	| delete  | -p                                      | stopped-upgrade-20210812175453-27878    | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:57:20 PDT | Thu, 12 Aug 2021 17:57:27 PDT |
	|         | stopped-upgrade-20210812175453-27878    |                                         |         |         |                               |                               |
	| start   | -p                                      | force-systemd-env-20210812175727-27878  | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:57:27 PDT | Thu, 12 Aug 2021 17:59:00 PDT |
	|         | force-systemd-env-20210812175727-27878  |                                         |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr -v=5    |                                         |         |         |                               |                               |
	|         | --driver=docker                         |                                         |         |         |                               |                               |
	| -p      | force-systemd-env-20210812175727-27878  | force-systemd-env-20210812175727-27878  | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:59:00 PDT | Thu, 12 Aug 2021 17:59:01 PDT |
	|         | ssh docker info --format                |                                         |         |         |                               |                               |
	|         | {{.CgroupDriver}}                       |                                         |         |         |                               |                               |
	| start   | -p                                      | running-upgrade-20210812175457-27878    | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:56:44 PDT | Thu, 12 Aug 2021 17:59:05 PDT |
	|         | running-upgrade-20210812175457-27878    |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr -v=1    |                                         |         |         |                               |                               |
	|         | --driver=docker                         |                                         |         |         |                               |                               |
	| delete  | -p                                      | running-upgrade-20210812175457-27878    | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:59:05 PDT | Thu, 12 Aug 2021 17:59:13 PDT |
	|         | running-upgrade-20210812175457-27878    |                                         |         |         |                               |                               |
	| delete  | -p                                      | flannel-20210812175913-27878            | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:59:13 PDT | Thu, 12 Aug 2021 17:59:13 PDT |
	|         | flannel-20210812175913-27878            |                                         |         |         |                               |                               |
	| delete  | -p                                      | force-systemd-env-20210812175727-27878  | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:59:01 PDT | Thu, 12 Aug 2021 17:59:17 PDT |
	|         | force-systemd-env-20210812175727-27878  |                                         |         |         |                               |                               |
	| start   | -p                                      | docker-flags-20210812175922-27878       | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:59:22 PDT | Thu, 12 Aug 2021 18:00:12 PDT |
	|         | docker-flags-20210812175922-27878       |                                         |         |         |                               |                               |
	|         | --cache-images=false                    |                                         |         |         |                               |                               |
	|         | --memory=2048                           |                                         |         |         |                               |                               |
	|         | --install-addons=false                  |                                         |         |         |                               |                               |
	|         | --wait=false --docker-env=FOO=BAR       |                                         |         |         |                               |                               |
	|         | --docker-env=BAZ=BAT                    |                                         |         |         |                               |                               |
	|         | --docker-opt=debug                      |                                         |         |         |                               |                               |
	|         | --docker-opt=icc=true                   |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                               |                               |
	|         | --driver=docker                         |                                         |         |         |                               |                               |
	| -p      | docker-flags-20210812175922-27878       | docker-flags-20210812175922-27878       | jenkins | v1.22.0 | Thu, 12 Aug 2021 18:00:13 PDT | Thu, 12 Aug 2021 18:00:13 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |         |                               |                               |
	|         | --property=Environment --no-pager       |                                         |         |         |                               |                               |
	| -p      | docker-flags-20210812175922-27878       | docker-flags-20210812175922-27878       | jenkins | v1.22.0 | Thu, 12 Aug 2021 18:00:13 PDT | Thu, 12 Aug 2021 18:00:14 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |         |                               |                               |
	|         | --property=ExecStart --no-pager         |                                         |         |         |                               |                               |
	| delete  | -p                                      | docker-flags-20210812175922-27878       | jenkins | v1.22.0 | Thu, 12 Aug 2021 18:00:14 PDT | Thu, 12 Aug 2021 18:00:26 PDT |
	|         | docker-flags-20210812175922-27878       |                                         |         |         |                               |                               |
	| start   | -p                                      | force-systemd-flag-20210812175936-27878 | jenkins | v1.22.0 | Thu, 12 Aug 2021 17:59:36 PDT | Thu, 12 Aug 2021 18:00:40 PDT |
	|         | force-systemd-flag-20210812175936-27878 |                                         |         |         |                               |                               |
	|         | --memory=2048 --force-systemd           |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker  |                                         |         |         |                               |                               |
	| -p      | force-systemd-flag-20210812175936-27878 | force-systemd-flag-20210812175936-27878 | jenkins | v1.22.0 | Thu, 12 Aug 2021 18:00:40 PDT | Thu, 12 Aug 2021 18:00:40 PDT |
	|         | ssh docker info --format                |                                         |         |         |                               |                               |
	|         | {{.CgroupDriver}}                       |                                         |         |         |                               |                               |
	| delete  | -p                                      | force-systemd-flag-20210812175936-27878 | jenkins | v1.22.0 | Thu, 12 Aug 2021 18:00:40 PDT | Thu, 12 Aug 2021 18:00:47 PDT |
	|         | force-systemd-flag-20210812175936-27878 |                                         |         |         |                               |                               |
	| start   | -p                                      | cert-options-20210812180026-27878       | jenkins | v1.22.0 | Thu, 12 Aug 2021 18:00:26 PDT | Thu, 12 Aug 2021 18:01:13 PDT |
	|         | cert-options-20210812180026-27878       |                                         |         |         |                               |                               |
	|         | --memory=2048                           |                                         |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1               |                                         |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15           |                                         |         |         |                               |                               |
	|         | --apiserver-names=localhost             |                                         |         |         |                               |                               |
	|         | --apiserver-names=www.google.com        |                                         |         |         |                               |                               |
	|         | --apiserver-port=8555                   |                                         |         |         |                               |                               |
	|         | --driver=docker                         |                                         |         |         |                               |                               |
	|         | --apiserver-name=localhost              |                                         |         |         |                               |                               |
	| -p      | cert-options-20210812180026-27878       | cert-options-20210812180026-27878       | jenkins | v1.22.0 | Thu, 12 Aug 2021 18:01:13 PDT | Thu, 12 Aug 2021 18:01:14 PDT |
	|         | ssh openssl x509 -text -noout -in       |                                         |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt   |                                         |         |         |                               |                               |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/12 18:00:47
	Running on machine: 37310
	Binary: Built with gc go1.16.7 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 18:00:47.370809   41015 out.go:298] Setting OutFile to fd 1 ...
	I0812 18:00:47.370936   41015 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 18:00:47.370941   41015 out.go:311] Setting ErrFile to fd 2...
	I0812 18:00:47.370944   41015 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 18:00:47.371036   41015 root.go:313] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0812 18:00:47.371370   41015 out.go:305] Setting JSON to false
	I0812 18:00:47.389891   41015 start.go:111] hostinfo: {"hostname":"37310.local","uptime":14421,"bootTime":1628802026,"procs":342,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"c86236b2-4976-3542-80ca-74a6b8b4ba03"}
	W0812 18:00:47.389978   41015 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0812 18:00:47.417982   41015 out.go:177] * [old-k8s-version-20210812180047-27878] minikube v1.22.0 on Darwin 11.2.3
	I0812 18:00:47.418062   41015 notify.go:169] Checking for updates...
	I0812 18:00:47.464700   41015 out.go:177]   - MINIKUBE_LOCATION=12230
	I0812 18:00:47.490697   41015 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0812 18:00:47.516641   41015 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0812 18:00:47.542488   41015 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0812 18:00:47.542998   41015 driver.go:335] Setting default libvirt URI to qemu:///system
	I0812 18:00:47.641127   41015 docker.go:132] docker version: linux-20.10.6
	I0812 18:00:47.641250   41015 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 18:00:47.822294   41015 info.go:263] docker info: {ID:NUVB:KIYS:WZ5S:BBBQ:I5K6:TSUW:ISZD:Z2IF:JI5D:OMPC:DPCS:TFBF Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2021-08-13 01:00:47.750440245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 18:00:47.849314   41015 out.go:177] * Using the docker driver based on user configuration
	I0812 18:00:47.849358   41015 start.go:278] selected driver: docker
	I0812 18:00:47.849367   41015 start.go:751] validating driver "docker" against <nil>
	I0812 18:00:47.849376   41015 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0812 18:00:47.851849   41015 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 18:00:48.033292   41015 info.go:263] docker info: {ID:NUVB:KIYS:WZ5S:BBBQ:I5K6:TSUW:ISZD:Z2IF:JI5D:OMPC:DPCS:TFBF Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2021-08-13 01:00:47.961667129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 18:00:48.033394   41015 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0812 18:00:48.033528   41015 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 18:00:48.033544   41015 cni.go:93] Creating CNI manager for ""
	I0812 18:00:48.033552   41015 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0812 18:00:48.033560   41015 start_flags.go:277] config:
	{Name:old-k8s-version-20210812180047-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210812180047-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 18:00:48.060218   41015 out.go:177] * Starting control plane node old-k8s-version-20210812180047-27878 in cluster old-k8s-version-20210812180047-27878
	I0812 18:00:48.060247   41015 cache.go:117] Beginning downloading kic base image for docker with docker
	I0812 18:00:48.086129   41015 out.go:177] * Pulling base image ...
	I0812 18:00:48.086166   41015 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0812 18:00:48.086206   41015 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0812 18:00:48.086215   41015 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0812 18:00:48.086230   41015 cache.go:56] Caching tarball of preloaded images
	I0812 18:00:48.086333   41015 preload.go:173] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0812 18:00:48.086346   41015 cache.go:59] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I0812 18:00:48.087221   41015 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/config.json ...
	I0812 18:00:48.087316   41015 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/config.json: {Name:mk2c5014ef1b64f988d3e84fd71ceaa4b30eb7e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:00:48.205355   41015 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0812 18:00:48.205383   41015 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0812 18:00:48.205396   41015 cache.go:205] Successfully downloaded all kic artifacts
	I0812 18:00:48.205442   41015 start.go:313] acquiring machines lock for old-k8s-version-20210812180047-27878: {Name:mk03c780082dafcc50ab74edc34d6d46b2440be6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 18:00:48.205593   41015 start.go:317] acquired machines lock for "old-k8s-version-20210812180047-27878" in 139.182µs
	I0812 18:00:48.205625   41015 start.go:89] Provisioning new machine with config: &{Name:old-k8s-version-20210812180047-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210812180047-27878 Namespace:default APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0812 18:00:48.205724   41015 start.go:126] createHost starting for "" (driver="docker")
	I0812 18:00:48.232491   41015 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0812 18:00:48.232668   41015 start.go:160] libmachine.API.Create for "old-k8s-version-20210812180047-27878" (driver="docker")
	I0812 18:00:48.232692   41015 client.go:168] LocalClient.Create starting
	I0812 18:00:48.232765   41015 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0812 18:00:48.232811   41015 main.go:130] libmachine: Decoding PEM data...
	I0812 18:00:48.232829   41015 main.go:130] libmachine: Parsing certificate...
	I0812 18:00:48.232922   41015 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0812 18:00:48.232954   41015 main.go:130] libmachine: Decoding PEM data...
	I0812 18:00:48.232966   41015 main.go:130] libmachine: Parsing certificate...
	I0812 18:00:48.233391   41015 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210812180047-27878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0812 18:00:48.345213   41015 cli_runner.go:162] docker network inspect old-k8s-version-20210812180047-27878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0812 18:00:48.345335   41015 network_create.go:255] running [docker network inspect old-k8s-version-20210812180047-27878] to gather additional debugging logs...
	I0812 18:00:48.345367   41015 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210812180047-27878
	W0812 18:00:48.459164   41015 cli_runner.go:162] docker network inspect old-k8s-version-20210812180047-27878 returned with exit code 1
	I0812 18:00:48.459195   41015 network_create.go:258] error running [docker network inspect old-k8s-version-20210812180047-27878]: docker network inspect old-k8s-version-20210812180047-27878: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20210812180047-27878
	I0812 18:00:48.459210   41015 network_create.go:260] output of [docker network inspect old-k8s-version-20210812180047-27878]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20210812180047-27878
	
	** /stderr **
	I0812 18:00:48.459312   41015 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0812 18:00:48.573048   41015 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000658058] misses:0}
	I0812 18:00:48.573086   41015 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0812 18:00:48.573106   41015 network_create.go:106] attempt to create docker network old-k8s-version-20210812180047-27878 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0812 18:00:48.573196   41015 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20210812180047-27878
	W0812 18:00:48.688518   41015 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20210812180047-27878 returned with exit code 1
	W0812 18:00:48.688558   41015 network_create.go:98] failed to create docker network old-k8s-version-20210812180047-27878 192.168.49.0/24, will retry: subnet is taken
	I0812 18:00:48.688778   41015 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000658058] amended:false}} dirty:map[] misses:0}
	I0812 18:00:48.688796   41015 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0812 18:00:48.688972   41015 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000658058] amended:true}} dirty:map[192.168.49.0:0xc000658058 192.168.58.0:0xc00000e140] misses:0}
	I0812 18:00:48.688984   41015 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0812 18:00:48.688992   41015 network_create.go:106] attempt to create docker network old-k8s-version-20210812180047-27878 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0812 18:00:48.689065   41015 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20210812180047-27878
	I0812 18:00:49.743775   41015 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20210812180047-27878: (1.054620036s)
	I0812 18:00:49.743802   41015 network_create.go:90] docker network old-k8s-version-20210812180047-27878 192.168.58.0/24 created
	I0812 18:00:49.743827   41015 kic.go:106] calculated static IP "192.168.58.2" for the "old-k8s-version-20210812180047-27878" container
	I0812 18:00:49.743958   41015 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0812 18:00:49.855501   41015 cli_runner.go:115] Run: docker volume create old-k8s-version-20210812180047-27878 --label name.minikube.sigs.k8s.io=old-k8s-version-20210812180047-27878 --label created_by.minikube.sigs.k8s.io=true
	I0812 18:00:49.969029   41015 oci.go:102] Successfully created a docker volume old-k8s-version-20210812180047-27878
	I0812 18:00:49.969162   41015 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20210812180047-27878-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210812180047-27878 --entrypoint /usr/bin/test -v old-k8s-version-20210812180047-27878:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0812 18:00:50.449524   41015 oci.go:106] Successfully prepared a docker volume old-k8s-version-20210812180047-27878
	I0812 18:00:50.449603   41015 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0812 18:00:50.449621   41015 kic.go:179] Starting extracting preloaded images to volume ...
	I0812 18:00:50.449673   41015 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0812 18:00:50.449719   41015 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210812180047-27878:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0812 18:00:50.647776   41015 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20210812180047-27878 --name old-k8s-version-20210812180047-27878 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210812180047-27878 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20210812180047-27878 --network old-k8s-version-20210812180047-27878 --ip 192.168.58.2 --volume old-k8s-version-20210812180047-27878:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0812 18:00:55.524578   40777 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:54:50.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-13 01:00:44.716369539 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0812 18:00:55.524610   40777 machine.go:91] provisioned docker machine in 12.442166744s
	I0812 18:00:55.524616   40777 client.go:171] LocalClient.Create took 27.820614771s
	I0812 18:00:55.524631   40777 start.go:168] duration metric: libmachine.API.Create for "cert-options-20210812180026-27878" took 27.820676457s
	I0812 18:00:55.524644   40777 start.go:267] post-start starting for "cert-options-20210812180026-27878" (driver="docker")
	I0812 18:00:55.524647   40777 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 18:00:55.524743   40777 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 18:00:55.524824   40777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210812180026-27878
	I0812 18:00:55.668650   40777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49642 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cert-options-20210812180026-27878/id_rsa Username:docker}
	I0812 18:00:55.756204   40777 ssh_runner.go:149] Run: cat /etc/os-release
	I0812 18:00:55.760182   40777 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0812 18:00:55.760195   40777 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0812 18:00:55.760205   40777 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0812 18:00:55.760210   40777 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0812 18:00:55.760218   40777 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0812 18:00:55.760319   40777 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0812 18:00:55.760468   40777 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/278782.pem -> 278782.pem in /etc/ssl/certs
	I0812 18:00:55.760638   40777 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0812 18:00:55.768141   40777 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/278782.pem --> /etc/ssl/certs/278782.pem (1708 bytes)
	I0812 18:00:55.788249   40777 start.go:270] post-start completed in 263.593471ms
	I0812 18:00:55.788775   40777 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-20210812180026-27878
	I0812 18:00:55.921150   40777 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/config.json ...
	I0812 18:00:55.921898   40777 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 18:00:55.921961   40777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210812180026-27878
	I0812 18:00:56.052655   40777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49642 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cert-options-20210812180026-27878/id_rsa Username:docker}
	I0812 18:00:56.140206   40777 start.go:129] duration metric: createHost completed in 28.485174252s
	I0812 18:00:56.140220   40777 start.go:80] releasing machines lock for "cert-options-20210812180026-27878", held for 28.485285835s
	I0812 18:00:56.140321   40777 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-20210812180026-27878
	I0812 18:00:56.267313   40777 ssh_runner.go:149] Run: systemctl --version
	I0812 18:00:56.267333   40777 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0812 18:00:56.267401   40777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210812180026-27878
	I0812 18:00:56.267416   40777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210812180026-27878
	I0812 18:00:56.409820   40777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49642 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cert-options-20210812180026-27878/id_rsa Username:docker}
	I0812 18:00:56.409856   40777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49642 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cert-options-20210812180026-27878/id_rsa Username:docker}
	I0812 18:00:56.592318   40777 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0812 18:00:56.603856   40777 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0812 18:00:56.618050   40777 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0812 18:00:56.618115   40777 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0812 18:00:56.630302   40777 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 18:00:56.644374   40777 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0812 18:00:56.708986   40777 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0812 18:00:56.769619   40777 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0812 18:00:56.801994   40777 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0812 18:00:56.859898   40777 ssh_runner.go:149] Run: sudo systemctl start docker
	I0812 18:00:56.870287   40777 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0812 18:00:56.917760   40777 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0812 18:00:54.720507   41015 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20210812180047-27878 --name old-k8s-version-20210812180047-27878 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210812180047-27878 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20210812180047-27878 --network old-k8s-version-20210812180047-27878 --ip 192.168.58.2 --volume old-k8s-version-20210812180047-27878:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79: (4.07256528s)
	I0812 18:00:54.720626   41015 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210812180047-27878 --format={{.State.Running}}
	I0812 18:00:54.866512   41015 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210812180047-27878 --format={{.State.Status}}
	I0812 18:00:55.016630   41015 cli_runner.go:115] Run: docker exec old-k8s-version-20210812180047-27878 stat /var/lib/dpkg/alternatives/iptables
	I0812 18:00:55.257677   41015 oci.go:278] the created container "old-k8s-version-20210812180047-27878" has a running status.
	I0812 18:00:55.257708   41015 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/old-k8s-version-20210812180047-27878/id_rsa...
	I0812 18:00:55.592537   41015 kic_runner.go:188] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/old-k8s-version-20210812180047-27878/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0812 18:00:55.600720   41015 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210812180047-27878:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (5.150834268s)
	I0812 18:00:55.600749   41015 kic.go:188] duration metric: took 5.151067 seconds to extract preloaded images to volume
	I0812 18:00:55.783436   41015 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210812180047-27878 --format={{.State.Status}}
	I0812 18:00:55.910365   41015 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0812 18:00:55.910389   41015 kic_runner.go:115] Args: [docker exec --privileged old-k8s-version-20210812180047-27878 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0812 18:00:56.095290   41015 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210812180047-27878 --format={{.State.Status}}
	I0812 18:00:56.217730   41015 machine.go:88] provisioning docker machine ...
	I0812 18:00:56.217781   41015 ubuntu.go:169] provisioning hostname "old-k8s-version-20210812180047-27878"
	I0812 18:00:56.217889   41015 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210812180047-27878
	I0812 18:00:56.362049   41015 main.go:130] libmachine: Using SSH client type: native
	I0812 18:00:56.362297   41015 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 49980 <nil> <nil>}
	I0812 18:00:56.362313   41015 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20210812180047-27878 && echo "old-k8s-version-20210812180047-27878" | sudo tee /etc/hostname
	I0812 18:00:56.363936   41015 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0812 18:00:56.988022   40777 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0812 18:00:56.988120   40777 cli_runner.go:115] Run: docker exec -t cert-options-20210812180026-27878 dig +short host.docker.internal
	I0812 18:00:57.166500   40777 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
	I0812 18:00:57.166577   40777 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0812 18:00:57.171830   40777 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 18:00:57.181685   40777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-20210812180026-27878
	I0812 18:00:57.296541   40777 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0812 18:00:57.296619   40777 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 18:00:57.331731   40777 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0812 18:00:57.331739   40777 docker.go:466] Images already preloaded, skipping extraction
	I0812 18:00:57.331839   40777 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 18:00:57.367710   40777 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0812 18:00:57.367722   40777 cache_images.go:74] Images are preloaded, skipping loading
	I0812 18:00:57.367814   40777 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0812 18:00:57.450929   40777 cni.go:93] Creating CNI manager for ""
	I0812 18:00:57.450939   40777 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0812 18:00:57.450956   40777 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0812 18:00:57.450975   40777 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8555 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-20210812180026-27878 NodeName:cert-options-20210812180026-27878 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0812 18:00:57.451132   40777 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "cert-options-20210812180026-27878"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 18:00:57.451247   40777 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=cert-options-20210812180026-27878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:cert-options-20210812180026-27878 Namespace:default APIServerName:localhost APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:}
	I0812 18:00:57.451326   40777 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0812 18:00:57.459302   40777 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 18:00:57.459357   40777 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 18:00:57.466510   40777 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0812 18:00:57.480571   40777 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 18:00:57.493427   40777 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0812 18:00:57.506108   40777 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0812 18:00:57.510175   40777 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 18:00:57.519913   40777 certs.go:52] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878 for IP: 192.168.49.2
	I0812 18:00:57.519996   40777 certs.go:179] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0812 18:00:57.520030   40777 certs.go:179] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0812 18:00:57.520093   40777 certs.go:294] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/client.key
	I0812 18:00:57.520104   40777 crypto.go:69] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/client.crt with IP's: []
	I0812 18:00:57.669150   40777 crypto.go:157] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/client.crt ...
	I0812 18:00:57.669159   40777 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/client.crt: {Name:mk1294e77f55c008c7573b82de597faba9fe3f6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:00:57.669484   40777 crypto.go:165] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/client.key ...
	I0812 18:00:57.669490   40777 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/client.key: {Name:mkc545a0a92b54b1f07e426949635db7f5456b3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:00:57.670371   40777 certs.go:294] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/apiserver.key.0e71f9ad
	I0812 18:00:57.670375   40777 crypto.go:69] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/apiserver.crt.0e71f9ad with IP's: [127.0.0.1 192.168.15.15 192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0812 18:00:57.867567   40777 crypto.go:157] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/apiserver.crt.0e71f9ad ...
	I0812 18:00:57.867580   40777 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/apiserver.crt.0e71f9ad: {Name:mkc7afc180d47d601ee71776448a4648142d9446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:00:57.868076   40777 crypto.go:165] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/apiserver.key.0e71f9ad ...
	I0812 18:00:57.868082   40777 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/apiserver.key.0e71f9ad: {Name:mkd7baa5e04963e327d85aaa1fc024a72b49a6b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:00:57.868260   40777 certs.go:305] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/apiserver.crt.0e71f9ad -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/apiserver.crt
	I0812 18:00:57.868436   40777 certs.go:309] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/apiserver.key.0e71f9ad -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/apiserver.key
	I0812 18:00:57.868591   40777 certs.go:294] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/proxy-client.key
	I0812 18:00:57.868597   40777 crypto.go:69] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/proxy-client.crt with IP's: []
	I0812 18:00:57.913677   40777 crypto.go:157] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/proxy-client.crt ...
	I0812 18:00:57.913682   40777 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/proxy-client.crt: {Name:mk1c4742a2bfa6e1ebe36bc8a798f61aa39d8c59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:00:57.913884   40777 crypto.go:165] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/proxy-client.key ...
	I0812 18:00:57.913891   40777 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/proxy-client.key: {Name:mk673a967ad53a9d6092abeb8dc4c12f989b0355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:00:57.915044   40777 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/27878.pem (1338 bytes)
	W0812 18:00:57.915125   40777 certs.go:369] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/27878_empty.pem, impossibly tiny 0 bytes
	I0812 18:00:57.915155   40777 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 18:00:57.915212   40777 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0812 18:00:57.915248   40777 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0812 18:00:57.915280   40777 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1675 bytes)
	I0812 18:00:57.915369   40777 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/278782.pem (1708 bytes)
	I0812 18:00:57.916228   40777 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1452 bytes)
	I0812 18:00:57.934527   40777 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 18:00:57.951334   40777 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 18:00:57.969659   40777 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cert-options-20210812180026-27878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 18:00:57.986977   40777 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 18:00:58.005797   40777 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0812 18:00:58.022299   40777 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 18:00:58.038782   40777 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0812 18:00:58.055432   40777 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 18:00:58.072220   40777 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/27878.pem --> /usr/share/ca-certificates/27878.pem (1338 bytes)
	I0812 18:00:58.091126   40777 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/278782.pem --> /usr/share/ca-certificates/278782.pem (1708 bytes)
	I0812 18:00:58.108486   40777 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 18:00:58.121648   40777 ssh_runner.go:149] Run: openssl version
	I0812 18:00:58.127398   40777 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/278782.pem && ln -fs /usr/share/ca-certificates/278782.pem /etc/ssl/certs/278782.pem"
	I0812 18:00:58.135345   40777 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/278782.pem
	I0812 18:00:58.139673   40777 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 13 00:03 /usr/share/ca-certificates/278782.pem
	I0812 18:00:58.139712   40777 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/278782.pem
	I0812 18:00:58.145410   40777 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/278782.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 18:00:58.153090   40777 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 18:00:58.161384   40777 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 18:00:58.167231   40777 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 13 00:01 /usr/share/ca-certificates/minikubeCA.pem
	I0812 18:00:58.167281   40777 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 18:00:58.172987   40777 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 18:00:58.180774   40777 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27878.pem && ln -fs /usr/share/ca-certificates/27878.pem /etc/ssl/certs/27878.pem"
	I0812 18:00:58.188276   40777 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/27878.pem
	I0812 18:00:58.192591   40777 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 13 00:03 /usr/share/ca-certificates/27878.pem
	I0812 18:00:58.192633   40777 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27878.pem
	I0812 18:00:58.198407   40777 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27878.pem /etc/ssl/certs/51391683.0"
	I0812 18:00:58.206203   40777 kubeadm.go:390] StartCluster: {Name:cert-options-20210812180026-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cert-options-20210812180026-27878 Namespace:default APIServerName:localhost APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8555 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 18:00:58.206316   40777 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0812 18:00:58.239483   40777 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 18:00:58.247254   40777 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 18:00:58.254338   40777 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0812 18:00:58.254390   40777 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 18:00:58.261820   40777 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 18:00:58.261843   40777 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0812 18:00:58.990482   40777 out.go:204]   - Generating certificates and keys ...
	I0812 18:01:01.457967   40777 out.go:204]   - Booting up control plane ...
	I0812 18:00:59.491742   41015 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210812180047-27878
	
	I0812 18:00:59.491860   41015 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210812180047-27878
	I0812 18:00:59.611488   41015 main.go:130] libmachine: Using SSH client type: native
	I0812 18:00:59.611652   41015 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 49980 <nil> <nil>}
	I0812 18:00:59.611670   41015 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20210812180047-27878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210812180047-27878/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20210812180047-27878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 18:00:59.731869   41015 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0812 18:00:59.731891   41015 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0812 18:00:59.731908   41015 ubuntu.go:177] setting up certificates
	I0812 18:00:59.731915   41015 provision.go:83] configureAuth start
	I0812 18:00:59.732002   41015 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210812180047-27878
	I0812 18:00:59.856499   41015 provision.go:137] copyHostCerts
	I0812 18:00:59.856634   41015 exec_runner.go:145] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0812 18:00:59.856646   41015 exec_runner.go:190] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0812 18:00:59.856742   41015 exec_runner.go:152] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0812 18:00:59.856957   41015 exec_runner.go:145] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0812 18:00:59.856970   41015 exec_runner.go:190] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0812 18:00:59.857040   41015 exec_runner.go:152] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0812 18:00:59.857229   41015 exec_runner.go:145] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0812 18:00:59.857236   41015 exec_runner.go:190] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0812 18:00:59.857298   41015 exec_runner.go:152] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1675 bytes)
	I0812 18:00:59.857435   41015 provision.go:111] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20210812180047-27878 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210812180047-27878]
	I0812 18:01:00.076001   41015 provision.go:171] copyRemoteCerts
	I0812 18:01:00.076080   41015 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 18:01:00.076145   41015 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210812180047-27878
	I0812 18:01:00.196827   41015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49980 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/old-k8s-version-20210812180047-27878/id_rsa Username:docker}
	I0812 18:01:00.282633   41015 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 18:01:00.301607   41015 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0812 18:01:00.320152   41015 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 18:01:00.340041   41015 provision.go:86] duration metric: configureAuth took 608.105287ms
	I0812 18:01:00.340055   41015 ubuntu.go:193] setting minikube options for container-runtime
	I0812 18:01:00.340297   41015 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210812180047-27878
	I0812 18:01:00.461865   41015 main.go:130] libmachine: Using SSH client type: native
	I0812 18:01:00.462039   41015 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 49980 <nil> <nil>}
	I0812 18:01:00.462051   41015 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0812 18:01:00.582171   41015 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0812 18:01:00.582188   41015 ubuntu.go:71] root file system type: overlay
	I0812 18:01:00.582335   41015 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0812 18:01:00.582447   41015 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210812180047-27878
	I0812 18:01:00.712214   41015 main.go:130] libmachine: Using SSH client type: native
	I0812 18:01:00.712397   41015 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 49980 <nil> <nil>}
	I0812 18:01:00.712459   41015 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0812 18:01:00.840069   41015 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0812 18:01:00.840202   41015 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210812180047-27878
	I0812 18:01:00.958530   41015 main.go:130] libmachine: Using SSH client type: native
	I0812 18:01:00.958698   41015 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 49980 <nil> <nil>}
	I0812 18:01:00.958711   41015 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0812 18:01:11.498462   40777 out.go:204]   - Configuring RBAC rules ...
	I0812 18:01:11.881033   40777 cni.go:93] Creating CNI manager for ""
	I0812 18:01:11.881041   40777 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0812 18:01:11.881062   40777 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 18:01:11.881132   40777 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:01:11.881168   40777 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=cert-options-20210812180026-27878 minikube.k8s.io/updated_at=2021_08_12T18_01_11_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:01:12.001726   40777 kubeadm.go:985] duration metric: took 120.658766ms to wait for elevateKubeSystemPrivileges.
	I0812 18:01:12.001769   40777 ops.go:34] apiserver oom_adj: -16
	I0812 18:01:12.081824   40777 kubeadm.go:392] StartCluster complete in 13.875458532s
	I0812 18:01:12.081839   40777 settings.go:142] acquiring lock: {Name:mk3e1d203e6439798c8d384e90b2bc232b4914ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:01:12.081924   40777 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0812 18:01:12.082616   40777 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mka81e290e52453cdddcec52ed4fa17d888b133f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:01:12.608000   40777 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cert-options-20210812180026-27878" rescaled to 1
	I0812 18:01:12.608031   40777 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8555 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0812 18:01:12.608041   40777 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 18:01:12.608063   40777 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0812 18:01:12.634584   40777 out.go:177] * Verifying Kubernetes components...
	I0812 18:01:12.634634   40777 addons.go:59] Setting storage-provisioner=true in profile "cert-options-20210812180026-27878"
	I0812 18:01:12.634635   40777 addons.go:59] Setting default-storageclass=true in profile "cert-options-20210812180026-27878"
	I0812 18:01:12.634650   40777 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-options-20210812180026-27878"
	I0812 18:01:12.634652   40777 addons.go:135] Setting addon storage-provisioner=true in "cert-options-20210812180026-27878"
	W0812 18:01:12.634655   40777 addons.go:147] addon storage-provisioner should already be in state true
	I0812 18:01:12.634661   40777 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0812 18:01:12.634674   40777 host.go:66] Checking if "cert-options-20210812180026-27878" exists ...
	I0812 18:01:12.634984   40777 cli_runner.go:115] Run: docker container inspect cert-options-20210812180026-27878 --format={{.State.Status}}
	I0812 18:01:12.635090   40777 cli_runner.go:115] Run: docker container inspect cert-options-20210812180026-27878 --format={{.State.Status}}
	I0812 18:01:12.686667   40777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-20210812180026-27878
	I0812 18:01:12.686711   40777 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 18:01:12.822916   40777 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 18:01:12.823033   40777 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 18:01:12.823044   40777 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 18:01:12.823145   40777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210812180026-27878
	I0812 18:01:12.826784   40777 addons.go:135] Setting addon default-storageclass=true in "cert-options-20210812180026-27878"
	W0812 18:01:12.826797   40777 addons.go:147] addon default-storageclass should already be in state true
	I0812 18:01:12.826820   40777 host.go:66] Checking if "cert-options-20210812180026-27878" exists ...
	I0812 18:01:12.827253   40777 cli_runner.go:115] Run: docker container inspect cert-options-20210812180026-27878 --format={{.State.Status}}
	I0812 18:01:12.881339   40777 api_server.go:50] waiting for apiserver process to appear ...
	I0812 18:01:12.881410   40777 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 18:01:12.978422   40777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49642 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cert-options-20210812180026-27878/id_rsa Username:docker}
	I0812 18:01:12.981312   40777 start.go:736] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0812 18:01:12.981355   40777 api_server.go:70] duration metric: took 373.299468ms to wait for apiserver process to appear ...
	I0812 18:01:12.981364   40777 api_server.go:86] waiting for apiserver healthz status ...
	I0812 18:01:12.981377   40777 api_server.go:239] Checking apiserver healthz at https://localhost:49646/healthz ...
	I0812 18:01:12.985493   40777 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 18:01:12.985499   40777 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 18:01:12.985579   40777 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210812180026-27878
	I0812 18:01:12.990988   40777 api_server.go:265] https://localhost:49646/healthz returned 200:
	ok
	I0812 18:01:12.992347   40777 api_server.go:139] control plane version: v1.21.3
	I0812 18:01:12.992355   40777 api_server.go:129] duration metric: took 10.988512ms to wait for apiserver health ...
	I0812 18:01:12.992362   40777 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 18:01:13.001906   40777 system_pods.go:59] 4 kube-system pods found
	I0812 18:01:13.001918   40777 system_pods.go:61] "etcd-cert-options-20210812180026-27878" [381d71b1-ee01-4356-8f32-0bdcc19fe576] Pending
	I0812 18:01:13.001921   40777 system_pods.go:61] "kube-apiserver-cert-options-20210812180026-27878" [ac77859a-6013-468e-b1c5-df053e1150c5] Pending
	I0812 18:01:13.001924   40777 system_pods.go:61] "kube-controller-manager-cert-options-20210812180026-27878" [e54176b2-f60f-4702-9d5d-b9cf0695be46] Pending
	I0812 18:01:13.001927   40777 system_pods.go:61] "kube-scheduler-cert-options-20210812180026-27878" [b211bffe-3d4b-4f2f-aab4-d878f3df1602] Pending
	I0812 18:01:13.001929   40777 system_pods.go:74] duration metric: took 9.565324ms to wait for pod list to return data ...
	I0812 18:01:13.001934   40777 kubeadm.go:547] duration metric: took 393.881181ms to wait for : map[apiserver:true system_pods:true] ...
	I0812 18:01:13.001944   40777 node_conditions.go:102] verifying NodePressure condition ...
	I0812 18:01:13.006626   40777 node_conditions.go:122] node storage ephemeral capacity is 123591232Ki
	I0812 18:01:13.006640   40777 node_conditions.go:123] node cpu capacity is 6
	I0812 18:01:13.006650   40777 node_conditions.go:105] duration metric: took 4.703379ms to run NodePressure ...
	I0812 18:01:13.006656   40777 start.go:231] waiting for startup goroutines ...
	I0812 18:01:13.077488   40777 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 18:01:13.112971   40777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49642 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cert-options-20210812180026-27878/id_rsa Username:docker}
	I0812 18:01:13.208408   40777 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 18:01:13.439049   40777 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0812 18:01:13.439089   40777 addons.go:344] enableAddons completed in 831.024845ms
	I0812 18:01:13.499032   40777 start.go:462] kubectl: 1.19.7, cluster: 1.21.3 (minor skew: 2)
	I0812 18:01:13.527723   40777 out.go:177] 
	W0812 18:01:13.527951   40777 out.go:242] ! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilities with Kubernetes 1.21.3.
	I0812 18:01:13.553709   40777 out.go:177]   - Want kubectl v1.21.3? Try 'minikube kubectl -- get pods -A'
	I0812 18:01:13.616714   40777 out.go:177] * Done! kubectl is now configured to use "cert-options-20210812180026-27878" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2021-08-13 01:00:43 UTC, end at Fri 2021-08-13 01:01:16 UTC. --
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[227]: time="2021-08-13T01:00:46.891064594Z" level=info msg="Daemon shutdown complete"
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[227]: time="2021-08-13T01:00:46.891138080Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 13 01:00:46 cert-options-20210812180026-27878 systemd[1]: docker.service: Succeeded.
	Aug 13 01:00:46 cert-options-20210812180026-27878 systemd[1]: Stopped Docker Application Container Engine.
	Aug 13 01:00:46 cert-options-20210812180026-27878 systemd[1]: Starting Docker Application Container Engine...
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:46.940484427Z" level=info msg="Starting up"
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:46.943859506Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:46.943894253Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:46.943923779Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:46.943937838Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:46.946073749Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:46.946133015Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:46.946146679Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:46.946153567Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:46.948828281Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:46.952600265Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:46.952631698Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Aug 13 01:00:46 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:46.952805872Z" level=info msg="Loading containers: start."
	Aug 13 01:00:52 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:52.219933178Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 13 01:00:55 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:55.078914200Z" level=info msg="Loading containers: done."
	Aug 13 01:00:55 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:55.499704481Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
	Aug 13 01:00:55 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:55.499967851Z" level=info msg="Daemon has completed initialization"
	Aug 13 01:00:55 cert-options-20210812180026-27878 systemd[1]: Started Docker Application Container Engine.
	Aug 13 01:00:55 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:55.541332774Z" level=info msg="API listen on [::]:2376"
	Aug 13 01:00:55 cert-options-20210812180026-27878 dockerd[473]: time="2021-08-13T01:00:55.546444868Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	c3681cef6b110       6be0dc1302e30       12 seconds ago      Running             kube-scheduler            0                   371a06770a9d4
	41ddfece61f93       bc2bb319a7038       12 seconds ago      Running             kube-controller-manager   0                   a43518eeb137a
	2382e4a5337e0       3d174f00aa39e       12 seconds ago      Running             kube-apiserver            0                   bd7e87db6e241
	1d2853ca4660d       0369cf4303ffd       12 seconds ago      Running             etcd                      0                   1af1686725120
	
	* 
	* ==> describe nodes <==
	* Name:               cert-options-20210812180026-27878
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=cert-options-20210812180026-27878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=cert-options-20210812180026-27878
	                    minikube.k8s.io/updated_at=2021_08_12T18_01_11_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 01:01:08 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  cert-options-20210812180026-27878
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 01:01:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 01:01:12 +0000   Fri, 13 Aug 2021 01:01:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 01:01:12 +0000   Fri, 13 Aug 2021 01:01:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 01:01:12 +0000   Fri, 13 Aug 2021 01:01:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 13 Aug 2021 01:01:12 +0000   Fri, 13 Aug 2021 01:01:12 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    cert-options-20210812180026-27878
	Capacity:
	  cpu:                6
	  ephemeral-storage:  123591232Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  123591232Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	System Info:
	  Machine ID:                 760e67beb8554645829f2357c8eb4ae7
	  System UUID:                4f22751f-d513-429a-8e95-a0b03bd2f0b9
	  Boot ID:                    81ba6ff7-1c8f-4710-8da9-98ca721349c4
	  Kernel Version:             5.10.25-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.7
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-cert-options-20210812180026-27878                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         5s
	  kube-system                 kube-apiserver-cert-options-20210812180026-27878             250m (4%)     0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-cert-options-20210812180026-27878    200m (3%)     0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-cert-options-20210812180026-27878             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 14s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientPID     14s (x3 over 14s)  kubelet  Node cert-options-20210812180026-27878 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13s (x4 over 14s)  kubelet  Node cert-options-20210812180026-27878 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x4 over 14s)  kubelet  Node cert-options-20210812180026-27878 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  4s                 kubelet  Node cert-options-20210812180026-27878 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s                 kubelet  Node cert-options-20210812180026-27878 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s                 kubelet  Node cert-options-20210812180026-27878 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeNotReady             4s                 kubelet  Node cert-options-20210812180026-27878 status is now: NodeNotReady
	
	* 
	* ==> dmesg <==
	* [  +0.036569] bpfilter: read fail 0
	[  +0.027143] bpfilter: read fail 0
	[  +0.031708] bpfilter: read fail 0
	[  +0.032708] bpfilter: read fail 0
	[  +0.034368] bpfilter: write fail -32
	[  +0.030462] bpfilter: read fail 0
	[  +0.036538] bpfilter: read fail 0
	[  +0.044923] bpfilter: read fail 0
	[  +0.031710] bpfilter: read fail 0
	[  +0.033751] bpfilter: read fail 0
	[  +0.029587] bpfilter: read fail 0
	[  +0.036824] bpfilter: read fail 0
	[  +0.026689] bpfilter: write fail -32
	[  +0.036583] bpfilter: read fail 0
	[  +0.031816] bpfilter: read fail 0
	[  +0.043570] bpfilter: read fail 0
	[  +0.026586] bpfilter: read fail 0
	[  +0.047830] bpfilter: read fail 0
	[  +0.026494] bpfilter: read fail 0
	[  +0.047940] bpfilter: read fail 0
	[  +0.036264] bpfilter: read fail 0
	[  +0.027573] bpfilter: read fail 0
	[  +0.037570] bpfilter: write fail -32
	[  +0.033900] bpfilter: read fail 0
	[  +0.033181] bpfilter: write fail -32
	
	* 
	* ==> etcd [1d2853ca4660] <==
	* raft2021/08/13 01:01:04 INFO: aec36adc501070cc became follower at term 0
	raft2021/08/13 01:01:04 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2021/08/13 01:01:04 INFO: aec36adc501070cc became follower at term 1
	raft2021/08/13 01:01:04 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-13 01:01:04.887878 W | auth: simple token is not cryptographically signed
	2021-08-13 01:01:04.891695 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-13 01:01:04.891944 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/13 01:01:04 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-13 01:01:04.892781 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-13 01:01:04.893379 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 01:01:04.893436 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-13 01:01:04.893524 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/13 01:01:05 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/13 01:01:05 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/13 01:01:05 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/13 01:01:05 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/13 01:01:05 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-13 01:01:05.244141 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 01:01:05.244483 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 01:01:05.244542 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 01:01:05.244549 I | embed: ready to serve client requests
	2021-08-13 01:01:05.244556 I | etcdserver: published {Name:cert-options-20210812180026-27878 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-13 01:01:05.244559 I | embed: ready to serve client requests
	2021-08-13 01:01:05.245443 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 01:01:05.245501 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  01:01:16 up  1:01,  0 users,  load average: 3.22, 3.20, 2.51
	Linux cert-options-20210812180026-27878 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [2382e4a5337e] <==
	* I0813 01:01:08.642944       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0813 01:01:08.642955       1 crd_finalizer.go:266] Starting CRDFinalizer
	E0813 01:01:08.652779       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0813 01:01:08.714189       1 controller.go:611] quota admission added evaluator for: namespaces
	I0813 01:01:08.723270       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0813 01:01:08.738955       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 01:01:08.739004       1 apf_controller.go:299] Running API Priority and Fairness config worker
	I0813 01:01:08.741336       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0813 01:01:08.741368       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 01:01:08.741387       1 cache.go:39] Caches are synced for autoregister controller
	I0813 01:01:08.742844       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0813 01:01:09.637367       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 01:01:09.637429       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 01:01:09.649300       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 01:01:09.651688       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 01:01:09.651716       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 01:01:09.918629       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 01:01:09.946055       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 01:01:10.024423       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0813 01:01:10.025153       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 01:01:10.027502       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 01:01:11.211304       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 01:01:11.696719       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 01:01:11.718237       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 01:01:12.233675       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	
	* 
	* ==> kube-controller-manager [41ddfece61f9] <==
	* I0813 01:01:11.810622       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	I0813 01:01:11.990596       1 controllermanager.go:574] Started "endpointslice"
	I0813 01:01:11.990647       1 endpointslice_controller.go:256] Starting endpoint slice controller
	I0813 01:01:11.990651       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
	I0813 01:01:12.011930       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
	I0813 01:01:12.011961       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0813 01:01:12.011973       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0813 01:01:12.012155       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
	I0813 01:01:12.012181       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0813 01:01:12.012193       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0813 01:01:12.012589       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0813 01:01:12.012617       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0813 01:01:12.012638       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0813 01:01:12.012817       1 controllermanager.go:574] Started "csrsigning"
	I0813 01:01:12.012829       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
	I0813 01:01:12.012909       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0813 01:01:12.012845       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	I0813 01:01:12.160562       1 controllermanager.go:574] Started "cronjob"
	I0813 01:01:12.160636       1 cronjob_controllerv2.go:125] Starting cronjob controller v2
	I0813 01:01:12.160640       1 shared_informer.go:240] Waiting for caches to sync for cronjob
	I0813 01:01:12.311011       1 controllermanager.go:574] Started "tokencleaner"
	I0813 01:01:12.311066       1 tokencleaner.go:118] Starting token cleaner controller
	I0813 01:01:12.311071       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I0813 01:01:12.311074       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I0813 01:01:12.461440       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-scheduler [c3681cef6b11] <==
	* I0813 01:01:08.713625       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 01:01:08.713650       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 01:01:08.715395       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 01:01:08.716534       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 01:01:08.716717       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 01:01:08.717374       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 01:01:08.717408       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 01:01:08.717455       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 01:01:08.717567       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 01:01:08.717570       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 01:01:08.717772       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 01:01:08.717944       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 01:01:08.718102       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 01:01:08.718239       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 01:01:08.717958       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 01:01:08.717971       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 01:01:09.581736       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 01:01:09.592497       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 01:01:09.597951       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 01:01:09.706692       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 01:01:09.716936       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 01:01:09.735684       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 01:01:09.737076       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 01:01:09.798144       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0813 01:01:12.113827       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 01:00:43 UTC, end at Fri 2021-08-13 01:01:17 UTC. --
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.720436    2520 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.720504    2520 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.720530    2520 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: E0813 01:01:12.735556    2520 kubelet.go:1683] "Failed creating a mirror pod for" err="pods \"etcd-cert-options-20210812180026-27878\" already exists" pod="kube-system/etcd-cert-options-20210812180026-27878"
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.829878    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8612b8de5935d56df4f5f298e7cb797-etc-ca-certificates\") pod \"kube-controller-manager-cert-options-20210812180026-27878\" (UID: \"d8612b8de5935d56df4f5f298e7cb797\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.829921    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/2532eb444368a951da5e597b0e00443f-etcd-certs\") pod \"etcd-cert-options-20210812180026-27878\" (UID: \"2532eb444368a951da5e597b0e00443f\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.829941    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d9dd2ef0eb0aaea4f48deb9b9373782-etc-ca-certificates\") pod \"kube-apiserver-cert-options-20210812180026-27878\" (UID: \"6d9dd2ef0eb0aaea4f48deb9b9373782\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.829976    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d9dd2ef0eb0aaea4f48deb9b9373782-ca-certs\") pod \"kube-apiserver-cert-options-20210812180026-27878\" (UID: \"6d9dd2ef0eb0aaea4f48deb9b9373782\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.829998    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d9dd2ef0eb0aaea4f48deb9b9373782-k8s-certs\") pod \"kube-apiserver-cert-options-20210812180026-27878\" (UID: \"6d9dd2ef0eb0aaea4f48deb9b9373782\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.830027    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d9dd2ef0eb0aaea4f48deb9b9373782-usr-local-share-ca-certificates\") pod \"kube-apiserver-cert-options-20210812180026-27878\" (UID: \"6d9dd2ef0eb0aaea4f48deb9b9373782\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.830060    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d9dd2ef0eb0aaea4f48deb9b9373782-usr-share-ca-certificates\") pod \"kube-apiserver-cert-options-20210812180026-27878\" (UID: \"6d9dd2ef0eb0aaea4f48deb9b9373782\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.830085    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8612b8de5935d56df4f5f298e7cb797-ca-certs\") pod \"kube-controller-manager-cert-options-20210812180026-27878\" (UID: \"d8612b8de5935d56df4f5f298e7cb797\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.830106    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d8612b8de5935d56df4f5f298e7cb797-flexvolume-dir\") pod \"kube-controller-manager-cert-options-20210812180026-27878\" (UID: \"d8612b8de5935d56df4f5f298e7cb797\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.830126    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8612b8de5935d56df4f5f298e7cb797-k8s-certs\") pod \"kube-controller-manager-cert-options-20210812180026-27878\" (UID: \"d8612b8de5935d56df4f5f298e7cb797\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.830145    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8612b8de5935d56df4f5f298e7cb797-kubeconfig\") pod \"kube-controller-manager-cert-options-20210812180026-27878\" (UID: \"d8612b8de5935d56df4f5f298e7cb797\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.830169    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8612b8de5935d56df4f5f298e7cb797-usr-local-share-ca-certificates\") pod \"kube-controller-manager-cert-options-20210812180026-27878\" (UID: \"d8612b8de5935d56df4f5f298e7cb797\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.830197    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/2532eb444368a951da5e597b0e00443f-etcd-data\") pod \"etcd-cert-options-20210812180026-27878\" (UID: \"2532eb444368a951da5e597b0e00443f\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.830233    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8612b8de5935d56df4f5f298e7cb797-usr-share-ca-certificates\") pod \"kube-controller-manager-cert-options-20210812180026-27878\" (UID: \"d8612b8de5935d56df4f5f298e7cb797\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.830254    2520 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/085e9d5a0247f1b69e5e732b90bddf9a-kubeconfig\") pod \"kube-scheduler-cert-options-20210812180026-27878\" (UID: \"085e9d5a0247f1b69e5e732b90bddf9a\") "
	Aug 13 01:01:12 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:12.917357    2520 apiserver.go:52] "Watching apiserver"
	Aug 13 01:01:13 cert-options-20210812180026-27878 kubelet[2520]: I0813 01:01:13.133208    2520 reconciler.go:157] "Reconciler: start to sync state"
	Aug 13 01:01:13 cert-options-20210812180026-27878 kubelet[2520]: E0813 01:01:13.921005    2520 kubelet.go:1683] "Failed creating a mirror pod for" err="pods \"kube-apiserver-cert-options-20210812180026-27878\" already exists" pod="kube-system/kube-apiserver-cert-options-20210812180026-27878"
	Aug 13 01:01:14 cert-options-20210812180026-27878 kubelet[2520]: E0813 01:01:14.120476    2520 kubelet.go:1683] "Failed creating a mirror pod for" err="pods \"etcd-cert-options-20210812180026-27878\" already exists" pod="kube-system/etcd-cert-options-20210812180026-27878"
	Aug 13 01:01:14 cert-options-20210812180026-27878 kubelet[2520]: E0813 01:01:14.351111    2520 kubelet.go:1683] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-cert-options-20210812180026-27878\" already exists" pod="kube-system/kube-controller-manager-cert-options-20210812180026-27878"
	Aug 13 01:01:14 cert-options-20210812180026-27878 kubelet[2520]: E0813 01:01:14.521500    2520 kubelet.go:1683] "Failed creating a mirror pod for" err="pods \"kube-scheduler-cert-options-20210812180026-27878\" already exists" pod="kube-system/kube-scheduler-cert-options-20210812180026-27878"
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p cert-options-20210812180026-27878 -n cert-options-20210812180026-27878
helpers_test.go:262: (dbg) Run:  kubectl --context cert-options-20210812180026-27878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:262: (dbg) Done: kubectl --context cert-options-20210812180026-27878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (2.372301365s)
helpers_test.go:271: non-running pods: storage-provisioner
helpers_test.go:273: ======> post-mortem[TestCertOptions]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context cert-options-20210812180026-27878 describe pod storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context cert-options-20210812180026-27878 describe pod storage-provisioner: exit status 1 (64.03484ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context cert-options-20210812180026-27878 describe pod storage-provisioner: exit status 1
helpers_test.go:176: Cleaning up "cert-options-20210812180026-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20210812180026-27878
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20210812180026-27878: (11.021242425s)
--- FAIL: TestCertOptions (64.65s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:249: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:249: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.012174122s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:252: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:259: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:262: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:266: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 4 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:281: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:281: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.006919305s)
functional_test_tunnel_test.go:291: expected body to contain "127.0.0.1", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (35.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:349: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:356: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (35.94s)

                                                
                                    
x
+
TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (64.11s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/Users/jenkins/workspace/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/Users/jenkins/workspace/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": exit status 1 (1m4.106360985s)

                                                
                                                
-- stdout --
	Get:1 http://deb.debian.org/debian sid InRelease [165 kB]
	Get:2 http://deb.debian.org/debian sid/main arm64 Packages [8519 kB]
	Fetched 8684 kB in 5s (1794 kB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  ca-certificates dmeventd dmsetup libaio1 libapparmor1 libbrotli1 libbsd0
	  libcurl3-gnutls libdevmapper-event1.02.1 libdevmapper1.02.1 libedit2
	  libexpat1 libglib2.0-0 libglib2.0-data libicu67 libldap-2.4-2 libldap-common
	  liblvm2cmd2.03 libmd0 libnghttp2-14 libnl-3-200 libnuma1 libpsl5 librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libxml2 libyajl2
	  lvm2 openssl publicsuffix shared-mime-info thin-provisioning-tools
	  xdg-user-dirs
	Suggested packages:
	  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
	  libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql
	The following NEW packages will be installed:
	  ca-certificates dmeventd dmsetup libaio1 libapparmor1 libbrotli1 libbsd0
	  libcurl3-gnutls libdevmapper-event1.02.1 libdevmapper1.02.1 libedit2
	  libexpat1 libglib2.0-0 libglib2.0-data libicu67 libldap-2.4-2 libldap-common
	  liblvm2cmd2.03 libmd0 libnghttp2-14 libnl-3-200 libnuma1 libpsl5 librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libvirt0 libxml2
	  libyajl2 lvm2 openssl publicsuffix shared-mime-info thin-provisioning-tools
	  xdg-user-dirs
	0 upgraded, 37 newly installed, 0 to remove and 24 not upgraded.
	Need to get 21.6 MB of archives.
	After this operation, 95.2 MB of additional disk space will be used.
	Get:1 http://deb.debian.org/debian sid/main arm64 libaio1 arm64 0.3.112-9 [12.3 kB]
	Get:2 http://deb.debian.org/debian sid/main arm64 dmsetup arm64 2:1.02.175-2.1 [85.1 kB]
	Get:3 http://deb.debian.org/debian sid/main arm64 libdevmapper1.02.1 arm64 2:1.02.175-2.1 [126 kB]
	Get:4 http://deb.debian.org/debian sid/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.175-2.1 [22.4 kB]
	Get:5 http://deb.debian.org/debian sid/main arm64 libmd0 arm64 1.0.3-3 [27.9 kB]
	Get:6 http://deb.debian.org/debian sid/main arm64 libbsd0 arm64 0.11.3-1 [106 kB]
	Get:7 http://deb.debian.org/debian sid/main arm64 libedit2 arm64 3.1-20191231-2+b1 [92.1 kB]
	Get:8 http://deb.debian.org/debian sid/main arm64 liblvm2cmd2.03 arm64 2.03.11-2.1 [608 kB]
	Get:9 http://deb.debian.org/debian sid/main arm64 dmeventd arm64 2:1.02.175-2.1 [66.5 kB]
	Get:10 http://deb.debian.org/debian sid/main arm64 lvm2 arm64 2.03.11-2.1 [1086 kB]
	Get:11 http://deb.debian.org/debian sid/main arm64 openssl arm64 1.1.1k-1 [829 kB]
	Get:12 http://deb.debian.org/debian sid/main arm64 ca-certificates all 20210119 [158 kB]
	Get:13 http://deb.debian.org/debian sid/main arm64 libapparmor1 arm64 2.13.6-10 [98.5 kB]
	Get:14 http://deb.debian.org/debian sid/main arm64 libbrotli1 arm64 1.0.9-2+b2 [267 kB]
	Get:15 http://deb.debian.org/debian sid/main arm64 libsasl2-modules-db arm64 2.1.27+dfsg-2.1 [69.3 kB]
	Get:16 http://deb.debian.org/debian sid/main arm64 libsasl2-2 arm64 2.1.27+dfsg-2.1 [105 kB]
	Get:17 http://deb.debian.org/debian sid/main arm64 libldap-2.4-2 arm64 2.4.57+dfsg-3 [222 kB]
	Get:18 http://deb.debian.org/debian sid/main arm64 libnghttp2-14 arm64 1.43.0-1 [73.8 kB]
	Get:19 http://deb.debian.org/debian sid/main arm64 libpsl5 arm64 0.21.0-1.2 [57.1 kB]
	Get:20 http://deb.debian.org/debian sid/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-2+b2 [59.4 kB]
	Get:21 http://deb.debian.org/debian sid/main arm64 libssh2-1 arm64 1.9.0-3 [162 kB]
	Get:22 http://deb.debian.org/debian sid/main arm64 libcurl3-gnutls arm64 7.74.0-1.3+b1 [318 kB]
	Get:23 http://deb.debian.org/debian sid/main arm64 libexpat1 arm64 2.2.10-2 [83.1 kB]
	Get:24 http://deb.debian.org/debian sid/main arm64 libglib2.0-0 arm64 2.66.8-1 [1286 kB]
	Get:25 http://deb.debian.org/debian sid/main arm64 libglib2.0-data all 2.66.8-1 [1164 kB]
	Get:26 http://deb.debian.org/debian sid/main arm64 libicu67 arm64 67.1-7 [8467 kB]
	Get:27 http://deb.debian.org/debian sid/main arm64 libldap-common all 2.4.57+dfsg-3 [95.9 kB]
	Get:28 http://deb.debian.org/debian sid/main arm64 libnl-3-200 arm64 3.4.0-1+b1 [60.6 kB]
	Get:29 http://deb.debian.org/debian sid/main arm64 libnuma1 arm64 2.0.12-1+b1 [25.8 kB]
	Get:30 http://deb.debian.org/debian sid/main arm64 libsasl2-modules arm64 2.1.27+dfsg-2.1 [101 kB]
	Get:31 http://deb.debian.org/debian sid/main arm64 libxml2 arm64 2.9.10+dfsg-6.7 [629 kB]
	Get:32 http://deb.debian.org/debian sid/main arm64 libyajl2 arm64 2.1.0-3 [22.9 kB]
	Get:33 http://deb.debian.org/debian sid/main arm64 libvirt0 arm64 7.0.0-3 [3749 kB]
	Get:34 http://deb.debian.org/debian sid/main arm64 publicsuffix all 20210108.1309-1 [121 kB]
	Get:35 http://deb.debian.org/debian sid/main arm64 shared-mime-info arm64 2.0-1 [700 kB]
	Get:36 http://deb.debian.org/debian sid/main arm64 thin-provisioning-tools arm64 0.9.0-1 [348 kB]
	Get:37 http://deb.debian.org/debian sid/main arm64 xdg-user-dirs arm64 0.17-2 [53.2 kB]
	Fetched 21.6 MB in 1s (24.1 MB/s)
	Selecting previously unselected package libaio1:arm64.
	(Reading database ... 6644 files and directories currently installed.)
	Preparing to unpack .../00-libaio1_0.3.112-9_arm64.deb ...
	Unpacking libaio1:arm64 (0.3.112-9) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../01-dmsetup_2%3a1.02.175-2.1_arm64.deb ...
	Unpacking dmsetup (2:1.02.175-2.1) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../02-libdevmapper1.02.1_2%3a1.02.175-2.1_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.175-2.1) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../03-libdevmapper-event1.02.1_2%3a1.02.175-2.1_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.175-2.1) ...
	Selecting previously unselected package libmd0:arm64.
	Preparing to unpack .../04-libmd0_1.0.3-3_arm64.deb ...
	Unpacking libmd0:arm64 (1.0.3-3) ...
	Selecting previously unselected package libbsd0:arm64.
	Preparing to unpack .../05-libbsd0_0.11.3-1_arm64.deb ...
	Unpacking libbsd0:arm64 (0.11.3-1) ...
	Selecting previously unselected package libedit2:arm64.
	Preparing to unpack .../06-libedit2_3.1-20191231-2+b1_arm64.deb ...
	Unpacking libedit2:arm64 (3.1-20191231-2+b1) ...
	Selecting previously unselected package liblvm2cmd2.03:arm64.
	Preparing to unpack .../07-liblvm2cmd2.03_2.03.11-2.1_arm64.deb ...
	Unpacking liblvm2cmd2.03:arm64 (2.03.11-2.1) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../08-dmeventd_2%3a1.02.175-2.1_arm64.deb ...
	Unpacking dmeventd (2:1.02.175-2.1) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../09-lvm2_2.03.11-2.1_arm64.deb ...
	Unpacking lvm2 (2.03.11-2.1) ...
	Selecting previously unselected package openssl.
	Preparing to unpack .../10-openssl_1.1.1k-1_arm64.deb ...
	Unpacking openssl (1.1.1k-1) ...
	Selecting previously unselected package ca-certificates.
	Preparing to unpack .../11-ca-certificates_20210119_all.deb ...
	Unpacking ca-certificates (20210119) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../12-libapparmor1_2.13.6-10_arm64.deb ...
	Unpacking libapparmor1:arm64 (2.13.6-10) ...
	Selecting previously unselected package libbrotli1:arm64.
	Preparing to unpack .../13-libbrotli1_1.0.9-2+b2_arm64.deb ...
	Unpacking libbrotli1:arm64 (1.0.9-2+b2) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../14-libsasl2-modules-db_2.1.27+dfsg-2.1_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27+dfsg-2.1) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../15-libsasl2-2_2.1.27+dfsg-2.1_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27+dfsg-2.1) ...
	Selecting previously unselected package libldap-2.4-2:arm64.
	Preparing to unpack .../16-libldap-2.4-2_2.4.57+dfsg-3_arm64.deb ...
	Unpacking libldap-2.4-2:arm64 (2.4.57+dfsg-3) ...
	Selecting previously unselected package libnghttp2-14:arm64.
	Preparing to unpack .../17-libnghttp2-14_1.43.0-1_arm64.deb ...
	Unpacking libnghttp2-14:arm64 (1.43.0-1) ...
	Selecting previously unselected package libpsl5:arm64.
	Preparing to unpack .../18-libpsl5_0.21.0-1.2_arm64.deb ...
	Unpacking libpsl5:arm64 (0.21.0-1.2) ...
	Selecting previously unselected package librtmp1:arm64.
	Preparing to unpack .../19-librtmp1_2.4+20151223.gitfa8646d.1-2+b2_arm64.deb ...
	Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2+b2) ...
	Selecting previously unselected package libssh2-1:arm64.
	Preparing to unpack .../20-libssh2-1_1.9.0-3_arm64.deb ...
	Unpacking libssh2-1:arm64 (1.9.0-3) ...
	Selecting previously unselected package libcurl3-gnutls:arm64.
	Preparing to unpack .../21-libcurl3-gnutls_7.74.0-1.3+b1_arm64.deb ...
	Unpacking libcurl3-gnutls:arm64 (7.74.0-1.3+b1) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../22-libexpat1_2.2.10-2_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.10-2) ...
	Selecting previously unselected package libglib2.0-0:arm64.
	Preparing to unpack .../23-libglib2.0-0_2.66.8-1_arm64.deb ...
	Unpacking libglib2.0-0:arm64 (2.66.8-1) ...
	Selecting previously unselected package libglib2.0-data.
	Preparing to unpack .../24-libglib2.0-data_2.66.8-1_all.deb ...
	Unpacking libglib2.0-data (2.66.8-1) ...
	Selecting previously unselected package libicu67:arm64.
	Preparing to unpack .../25-libicu67_67.1-7_arm64.deb ...
	Unpacking libicu67:arm64 (67.1-7) ...
	Selecting previously unselected package libldap-common.
	Preparing to unpack .../26-libldap-common_2.4.57+dfsg-3_all.deb ...
	Unpacking libldap-common (2.4.57+dfsg-3) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../27-libnl-3-200_3.4.0-1+b1_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.4.0-1+b1) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../28-libnuma1_2.0.12-1+b1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.12-1+b1) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../29-libsasl2-modules_2.1.27+dfsg-2.1_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27+dfsg-2.1) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../30-libxml2_2.9.10+dfsg-6.7_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.10+dfsg-6.7) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../31-libyajl2_2.1.0-3_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-3) ...
	Selecting previously unselected package libvirt0:arm64.
	Preparing to unpack .../32-libvirt0_7.0.0-3_arm64.deb ...
	Unpacking libvirt0:arm64 (7.0.0-3) ...
	Selecting previously unselected package publicsuffix.
	Preparing to unpack .../33-publicsuffix_20210108.1309-1_all.deb ...
	Unpacking publicsuffix (20210108.1309-1) ...
	Selecting previously unselected package shared-mime-info.
	Preparing to unpack .../34-shared-mime-info_2.0-1_arm64.deb ...
	Unpacking shared-mime-info (2.0-1) ...
	Selecting previously unselected package thin-provisioning-tools.
	Preparing to unpack .../35-thin-provisioning-tools_0.9.0-1_arm64.deb ...
	Unpacking thin-provisioning-tools (0.9.0-1) ...
	Selecting previously unselected package xdg-user-dirs.
	Preparing to unpack .../36-xdg-user-dirs_0.17-2_arm64.deb ...
	Unpacking xdg-user-dirs (0.17-2) ...
	Setting up libexpat1:arm64 (2.2.10-2) ...
	Setting up libapparmor1:arm64 (2.13.6-10) ...
	Setting up libpsl5:arm64 (0.21.0-1.2) ...
	Setting up libicu67:arm64 (67.1-7) ...
	Setting up xdg-user-dirs (0.17-2) ...
	Setting up libglib2.0-0:arm64 (2.66.8-1) ...
	No schema files found: doing nothing.
	Setting up libbrotli1:arm64 (1.0.9-2+b2) ...
	Setting up libsasl2-modules:arm64 (2.1.27+dfsg-2.1) ...
	Setting up libyajl2:arm64 (2.1.0-3) ...
	Setting up libnghttp2-14:arm64 (1.43.0-1) ...
	Setting up libldap-common (2.4.57+dfsg-3) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27+dfsg-2.1) ...
	Setting up libglib2.0-data (2.66.8-1) ...
	Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2+b2) ...
	Setting up libsasl2-2:arm64 (2.1.27+dfsg-2.1) ...
	Setting up libnuma1:arm64 (2.0.12-1+b1) ...
	Setting up libmd0:arm64 (1.0.3-3) ...
	Setting up libnl-3-200:arm64 (3.4.0-1+b1) ...
	Setting up libssh2-1:arm64 (1.9.0-3) ...
	Setting up libaio1:arm64 (0.3.112-9) ...
	Setting up openssl (1.1.1k-1) ...
	Setting up libbsd0:arm64 (0.11.3-1) ...
	Setting up publicsuffix (20210108.1309-1) ...
	Setting up libxml2:arm64 (2.9.10+dfsg-6.7) ...
	Setting up libedit2:arm64 (3.1-20191231-2+b1) ...
	Setting up libldap-2.4-2:arm64 (2.4.57+dfsg-3) ...
	Setting up libcurl3-gnutls:arm64 (7.74.0-1.3+b1) ...
	Setting up ca-certificates (20210119) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.32.1 /usr/local/share/perl/5.32.1 /usr/lib/aarch64-linux-gnu/perl5/5.32 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl-base /usr/lib/aarch64-linux-gnu/perl/5.32 /usr/share/perl/5.32 /usr/local/lib/site_perl) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Updating certificates in /etc/ssl/certs...
	129 added, 0 removed; done.
	Setting up shared-mime-info (2.0-1) ...
	Setting up thin-provisioning-tools (0.9.0-1) ...
	Setting up libvirt0:arm64 (7.0.0-3) ...
	Setting up liblvm2cmd2.03:arm64 (2.03.11-2.1) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.175-2.1) ...
	Setting up dmsetup (2:1.02.175-2.1) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.175-2.1) ...
	Setting up dmeventd (2:1.02.175-2.1) ...
	Setting up lvm2 (2.03.11-2.1) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Processing triggers for libc-bin (2.31-12) ...
	Processing triggers for ca-certificates (20210119) ...
	Updating certificates in /etc/ssl/certs...
	0 added, 0 removed; done.
	Running hooks in /etc/ca-certificates/update.d...
	done.

                                                
                                                
-- /stdout --
** stderr ** 
	WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb (--install):
	 package architecture (amd64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb

                                                
                                                
** /stderr **
pkg_install_test.go:87: failed to install "/Users/jenkins/workspace/out/docker-machine-driver-kvm2_1.22.0-0_amd64.deb" on "debian:sid": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (64.11s)

                                                
                                    
x
+
TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (66.68s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/Users/jenkins/workspace/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/Users/jenkins/workspace/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": exit status 1 (1m6.680857261s)

                                                
                                                
-- stdout --
	Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
	Get:2 http://deb.debian.org/debian buster InRelease [122 kB]
	Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
	Get:4 http://security.debian.org/debian-security buster/updates/main arm64 Packages [295 kB]
	Get:5 http://deb.debian.org/debian buster/main arm64 Packages [7735 kB]
	Get:6 http://deb.debian.org/debian buster-updates/main arm64 Packages [14.5 kB]
	Fetched 8284 kB in 6s (1464 kB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libavahi-client3 libavahi-common-data libavahi-common3 libcurl3-gnutls
	  libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1
	  libgssapi-krb5-2 libicu63 libk5crypto3 libkeyutils1 libkrb5-3
	  libkrb5support0 libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14
	  libnl-3-200 libnl-route-3-200 libnuma1 libpsl5 libreadline5 librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libssl1.1 libxml2
	  libyajl2 lsb-base lvm2 openssl publicsuffix readline-common
	  thin-provisioning-tools
	Suggested packages:
	  default-dbus-session-bus | dbus-session-bus krb5-doc krb5-user
	  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
	  libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql readline-doc
	The following NEW packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libavahi-client3 libavahi-common-data libavahi-common3 libcurl3-gnutls
	  libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1
	  libgssapi-krb5-2 libicu63 libk5crypto3 libkeyutils1 libkrb5-3
	  libkrb5support0 libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14
	  libnl-3-200 libnl-route-3-200 libnuma1 libpsl5 libreadline5 librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libssl1.1 libvirt0
	  libxml2 libyajl2 lsb-base lvm2 openssl publicsuffix readline-common
	  thin-provisioning-tools
	0 upgraded, 45 newly installed, 0 to remove and 2 not upgraded.
	Need to get 21.7 MB of archives.
	After this operation, 66.7 MB of additional disk space will be used.
	Get:1 http://security.debian.org/debian-security buster/updates/main arm64 krb5-locales all 1.17-3+deb10u2 [95.5 kB]
	Get:2 http://deb.debian.org/debian buster/main arm64 readline-common all 7.0-5 [70.6 kB]
	Get:3 http://deb.debian.org/debian buster/main arm64 libapparmor1 arm64 2.13.2-10 [93.8 kB]
	Get:4 http://security.debian.org/debian-security buster/updates/main arm64 libkrb5support0 arm64 1.17-3+deb10u2 [64.9 kB]
	Get:5 http://security.debian.org/debian-security buster/updates/main arm64 libk5crypto3 arm64 1.17-3+deb10u2 [123 kB]
	Get:6 http://deb.debian.org/debian buster/main arm64 libdbus-1-3 arm64 1.12.20-0+deb10u1 [206 kB]
	Get:7 http://security.debian.org/debian-security buster/updates/main arm64 libkrb5-3 arm64 1.17-3+deb10u2 [351 kB]
	Get:8 http://deb.debian.org/debian buster/main arm64 libexpat1 arm64 2.2.6-2+deb10u1 [85.4 kB]
	Get:9 http://security.debian.org/debian-security buster/updates/main arm64 libgssapi-krb5-2 arm64 1.17-3+deb10u2 [150 kB]
	Get:10 http://deb.debian.org/debian buster/main arm64 dbus arm64 1.12.20-0+deb10u1 [227 kB]
	Get:11 http://deb.debian.org/debian buster/main arm64 libkeyutils1 arm64 1.6-6 [14.9 kB]
	Get:12 http://deb.debian.org/debian buster/main arm64 libssl1.1 arm64 1.1.1d-0+deb10u6 [1382 kB]
	Get:13 http://deb.debian.org/debian buster/main arm64 openssl arm64 1.1.1d-0+deb10u6 [823 kB]
	Get:14 http://deb.debian.org/debian buster/main arm64 ca-certificates all 20200601~deb10u2 [166 kB]
	Get:15 http://deb.debian.org/debian buster/main arm64 dmsetup arm64 2:1.02.155-3 [83.9 kB]
	Get:16 http://deb.debian.org/debian buster/main arm64 libdevmapper1.02.1 arm64 2:1.02.155-3 [124 kB]
	Get:17 http://deb.debian.org/debian buster/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.155-3 [21.7 kB]
	Get:18 http://deb.debian.org/debian buster/main arm64 libaio1 arm64 0.3.112-3 [11.1 kB]
	Get:19 http://deb.debian.org/debian buster/main arm64 liblvm2cmd2.03 arm64 2.03.02-3 [550 kB]
	Get:20 http://deb.debian.org/debian buster/main arm64 dmeventd arm64 2:1.02.155-3 [63.9 kB]
	Get:21 http://deb.debian.org/debian buster/main arm64 libavahi-common-data arm64 0.7-4+deb10u1 [122 kB]
	Get:22 http://deb.debian.org/debian buster/main arm64 libavahi-common3 arm64 0.7-4+deb10u1 [53.4 kB]
	Get:23 http://deb.debian.org/debian buster/main arm64 libavahi-client3 arm64 0.7-4+deb10u1 [56.9 kB]
	Get:24 http://deb.debian.org/debian buster/main arm64 libsasl2-modules-db arm64 2.1.27+dfsg-1+deb10u1 [69.3 kB]
	Get:25 http://deb.debian.org/debian buster/main arm64 libsasl2-2 arm64 2.1.27+dfsg-1+deb10u1 [105 kB]
	Get:26 http://deb.debian.org/debian buster/main arm64 libldap-common all 2.4.47+dfsg-3+deb10u6 [90.0 kB]
	Get:27 http://deb.debian.org/debian buster/main arm64 libldap-2.4-2 arm64 2.4.47+dfsg-3+deb10u6 [216 kB]
	Get:28 http://deb.debian.org/debian buster/main arm64 libnghttp2-14 arm64 1.36.0-2+deb10u1 [81.9 kB]
	Get:29 http://deb.debian.org/debian buster/main arm64 libpsl5 arm64 0.20.2-2 [53.6 kB]
	Get:30 http://deb.debian.org/debian buster/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-2 [55.7 kB]
	Get:31 http://deb.debian.org/debian buster/main arm64 libssh2-1 arm64 1.8.0-2.1 [135 kB]
	Get:32 http://deb.debian.org/debian buster/main arm64 libcurl3-gnutls arm64 7.64.0-4+deb10u2 [311 kB]
	Get:33 http://deb.debian.org/debian buster/main arm64 libicu63 arm64 63.1-6+deb10u1 [8151 kB]
	Get:34 http://deb.debian.org/debian buster/main arm64 libnl-3-200 arm64 3.4.0-1 [54.9 kB]
	Get:35 http://deb.debian.org/debian buster/main arm64 libnl-route-3-200 arm64 3.4.0-1 [134 kB]
	Get:36 http://deb.debian.org/debian buster/main arm64 libnuma1 arm64 2.0.12-1 [25.6 kB]
	Get:37 http://deb.debian.org/debian buster/main arm64 libreadline5 arm64 5.2+dfsg-3+b13 [113 kB]
	Get:38 http://deb.debian.org/debian buster/main arm64 libsasl2-modules arm64 2.1.27+dfsg-1+deb10u1 [102 kB]
	Get:39 http://deb.debian.org/debian buster/main arm64 libxml2 arm64 2.9.4+dfsg1-7+deb10u2 [625 kB]
	Get:40 http://deb.debian.org/debian buster/main arm64 libyajl2 arm64 2.1.0-3 [22.9 kB]
	Get:41 http://deb.debian.org/debian buster/main arm64 libvirt0 arm64 5.0.0-4+deb10u1 [4939 kB]
	Get:42 http://deb.debian.org/debian buster/main arm64 lsb-base all 10.2019051400 [28.4 kB]
	Get:43 http://deb.debian.org/debian buster/main arm64 lvm2 arm64 2.03.02-3 [1011 kB]
	Get:44 http://deb.debian.org/debian buster/main arm64 publicsuffix all 20190415.1030-1 [116 kB]
	Get:45 http://deb.debian.org/debian buster/main arm64 thin-provisioning-tools arm64 0.7.6-2.1 [318 kB]
	Fetched 21.7 MB in 1s (26.0 MB/s)
	Selecting previously unselected package readline-common.
	(Reading database ... 6670 files and directories currently installed.)
	Preparing to unpack .../00-readline-common_7.0-5_all.deb ...
	Unpacking readline-common (7.0-5) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../01-libapparmor1_2.13.2-10_arm64.deb ...
	Unpacking libapparmor1:arm64 (2.13.2-10) ...
	Selecting previously unselected package libdbus-1-3:arm64.
	Preparing to unpack .../02-libdbus-1-3_1.12.20-0+deb10u1_arm64.deb ...
	Unpacking libdbus-1-3:arm64 (1.12.20-0+deb10u1) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../03-libexpat1_2.2.6-2+deb10u1_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.6-2+deb10u1) ...
	Selecting previously unselected package dbus.
	Preparing to unpack .../04-dbus_1.12.20-0+deb10u1_arm64.deb ...
	Unpacking dbus (1.12.20-0+deb10u1) ...
	Selecting previously unselected package krb5-locales.
	Preparing to unpack .../05-krb5-locales_1.17-3+deb10u2_all.deb ...
	Unpacking krb5-locales (1.17-3+deb10u2) ...
	Selecting previously unselected package libkeyutils1:arm64.
	Preparing to unpack .../06-libkeyutils1_1.6-6_arm64.deb ...
	Unpacking libkeyutils1:arm64 (1.6-6) ...
	Selecting previously unselected package libkrb5support0:arm64.
	Preparing to unpack .../07-libkrb5support0_1.17-3+deb10u2_arm64.deb ...
	Unpacking libkrb5support0:arm64 (1.17-3+deb10u2) ...
	Selecting previously unselected package libk5crypto3:arm64.
	Preparing to unpack .../08-libk5crypto3_1.17-3+deb10u2_arm64.deb ...
	Unpacking libk5crypto3:arm64 (1.17-3+deb10u2) ...
	Selecting previously unselected package libssl1.1:arm64.
	Preparing to unpack .../09-libssl1.1_1.1.1d-0+deb10u6_arm64.deb ...
	Unpacking libssl1.1:arm64 (1.1.1d-0+deb10u6) ...
	Selecting previously unselected package libkrb5-3:arm64.
	Preparing to unpack .../10-libkrb5-3_1.17-3+deb10u2_arm64.deb ...
	Unpacking libkrb5-3:arm64 (1.17-3+deb10u2) ...
	Selecting previously unselected package libgssapi-krb5-2:arm64.
	Preparing to unpack .../11-libgssapi-krb5-2_1.17-3+deb10u2_arm64.deb ...
	Unpacking libgssapi-krb5-2:arm64 (1.17-3+deb10u2) ...
	Selecting previously unselected package openssl.
	Preparing to unpack .../12-openssl_1.1.1d-0+deb10u6_arm64.deb ...
	Unpacking openssl (1.1.1d-0+deb10u6) ...
	Selecting previously unselected package ca-certificates.
	Preparing to unpack .../13-ca-certificates_20200601~deb10u2_all.deb ...
	Unpacking ca-certificates (20200601~deb10u2) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../14-dmsetup_2%3a1.02.155-3_arm64.deb ...
	Unpacking dmsetup (2:1.02.155-3) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../15-libdevmapper1.02.1_2%3a1.02.155-3_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.155-3) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../16-libdevmapper-event1.02.1_2%3a1.02.155-3_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.155-3) ...
	Selecting previously unselected package libaio1:arm64.
	Preparing to unpack .../17-libaio1_0.3.112-3_arm64.deb ...
	Unpacking libaio1:arm64 (0.3.112-3) ...
	Selecting previously unselected package liblvm2cmd2.03:arm64.
	Preparing to unpack .../18-liblvm2cmd2.03_2.03.02-3_arm64.deb ...
	Unpacking liblvm2cmd2.03:arm64 (2.03.02-3) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../19-dmeventd_2%3a1.02.155-3_arm64.deb ...
	Unpacking dmeventd (2:1.02.155-3) ...
	Selecting previously unselected package libavahi-common-data:arm64.
	Preparing to unpack .../20-libavahi-common-data_0.7-4+deb10u1_arm64.deb ...
	Unpacking libavahi-common-data:arm64 (0.7-4+deb10u1) ...
	Selecting previously unselected package libavahi-common3:arm64.
	Preparing to unpack .../21-libavahi-common3_0.7-4+deb10u1_arm64.deb ...
	Unpacking libavahi-common3:arm64 (0.7-4+deb10u1) ...
	Selecting previously unselected package libavahi-client3:arm64.
	Preparing to unpack .../22-libavahi-client3_0.7-4+deb10u1_arm64.deb ...
	Unpacking libavahi-client3:arm64 (0.7-4+deb10u1) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../23-libsasl2-modules-db_2.1.27+dfsg-1+deb10u1_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../24-libsasl2-2_2.1.27+dfsg-1+deb10u1_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Selecting previously unselected package libldap-common.
	Preparing to unpack .../25-libldap-common_2.4.47+dfsg-3+deb10u6_all.deb ...
	Unpacking libldap-common (2.4.47+dfsg-3+deb10u6) ...
	Selecting previously unselected package libldap-2.4-2:arm64.
	Preparing to unpack .../26-libldap-2.4-2_2.4.47+dfsg-3+deb10u6_arm64.deb ...
	Unpacking libldap-2.4-2:arm64 (2.4.47+dfsg-3+deb10u6) ...
	Selecting previously unselected package libnghttp2-14:arm64.
	Preparing to unpack .../27-libnghttp2-14_1.36.0-2+deb10u1_arm64.deb ...
	Unpacking libnghttp2-14:arm64 (1.36.0-2+deb10u1) ...
	Selecting previously unselected package libpsl5:arm64.
	Preparing to unpack .../28-libpsl5_0.20.2-2_arm64.deb ...
	Unpacking libpsl5:arm64 (0.20.2-2) ...
	Selecting previously unselected package librtmp1:arm64.
	Preparing to unpack .../29-librtmp1_2.4+20151223.gitfa8646d.1-2_arm64.deb ...
	Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2) ...
	Selecting previously unselected package libssh2-1:arm64.
	Preparing to unpack .../30-libssh2-1_1.8.0-2.1_arm64.deb ...
	Unpacking libssh2-1:arm64 (1.8.0-2.1) ...
	Selecting previously unselected package libcurl3-gnutls:arm64.
	Preparing to unpack .../31-libcurl3-gnutls_7.64.0-4+deb10u2_arm64.deb ...
	Unpacking libcurl3-gnutls:arm64 (7.64.0-4+deb10u2) ...
	Selecting previously unselected package libicu63:arm64.
	Preparing to unpack .../32-libicu63_63.1-6+deb10u1_arm64.deb ...
	Unpacking libicu63:arm64 (63.1-6+deb10u1) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../33-libnl-3-200_3.4.0-1_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.4.0-1) ...
	Selecting previously unselected package libnl-route-3-200:arm64.
	Preparing to unpack .../34-libnl-route-3-200_3.4.0-1_arm64.deb ...
	Unpacking libnl-route-3-200:arm64 (3.4.0-1) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../35-libnuma1_2.0.12-1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.12-1) ...
	Selecting previously unselected package libreadline5:arm64.
	Preparing to unpack .../36-libreadline5_5.2+dfsg-3+b13_arm64.deb ...
	Unpacking libreadline5:arm64 (5.2+dfsg-3+b13) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../37-libsasl2-modules_2.1.27+dfsg-1+deb10u1_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../38-libxml2_2.9.4+dfsg1-7+deb10u2_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.4+dfsg1-7+deb10u2) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../39-libyajl2_2.1.0-3_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-3) ...
	Selecting previously unselected package libvirt0:arm64.
	Preparing to unpack .../40-libvirt0_5.0.0-4+deb10u1_arm64.deb ...
	Unpacking libvirt0:arm64 (5.0.0-4+deb10u1) ...
	Selecting previously unselected package lsb-base.
	Preparing to unpack .../41-lsb-base_10.2019051400_all.deb ...
	Unpacking lsb-base (10.2019051400) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../42-lvm2_2.03.02-3_arm64.deb ...
	Unpacking lvm2 (2.03.02-3) ...
	Selecting previously unselected package publicsuffix.
	Preparing to unpack .../43-publicsuffix_20190415.1030-1_all.deb ...
	Unpacking publicsuffix (20190415.1030-1) ...
	Selecting previously unselected package thin-provisioning-tools.
	Preparing to unpack .../44-thin-provisioning-tools_0.7.6-2.1_arm64.deb ...
	Unpacking thin-provisioning-tools (0.7.6-2.1) ...
	Setting up libexpat1:arm64 (2.2.6-2+deb10u1) ...
	Setting up lsb-base (10.2019051400) ...
	Setting up libkeyutils1:arm64 (1.6-6) ...
	Setting up libapparmor1:arm64 (2.13.2-10) ...
	Setting up libpsl5:arm64 (0.20.2-2) ...
	Setting up libssl1.1:arm64 (1.1.1d-0+deb10u6) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.28.1 /usr/local/share/perl/5.28.1 /usr/lib/aarch64-linux-gnu/perl5/5.28 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.28 /usr/share/perl/5.28 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Setting up libsasl2-modules:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Setting up libyajl2:arm64 (2.1.0-3) ...
	Setting up libnghttp2-14:arm64 (1.36.0-2+deb10u1) ...
	Setting up krb5-locales (1.17-3+deb10u2) ...
	Setting up libldap-common (2.4.47+dfsg-3+deb10u6) ...
	Setting up libicu63:arm64 (63.1-6+deb10u1) ...
	Setting up libkrb5support0:arm64 (1.17-3+deb10u2) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2) ...
	Setting up libavahi-common-data:arm64 (0.7-4+deb10u1) ...
	Setting up libdbus-1-3:arm64 (1.12.20-0+deb10u1) ...
	Setting up dbus (1.12.20-0+deb10u1) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Setting up libk5crypto3:arm64 (1.17-3+deb10u2) ...
	Setting up libsasl2-2:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Setting up libnuma1:arm64 (2.0.12-1) ...
	Setting up libnl-3-200:arm64 (3.4.0-1) ...
	Setting up libssh2-1:arm64 (1.8.0-2.1) ...
	Setting up libkrb5-3:arm64 (1.17-3+deb10u2) ...
	Setting up libaio1:arm64 (0.3.112-3) ...
	Setting up openssl (1.1.1d-0+deb10u6) ...
	Setting up readline-common (7.0-5) ...
	Setting up publicsuffix (20190415.1030-1) ...
	Setting up libxml2:arm64 (2.9.4+dfsg1-7+deb10u2) ...
	Setting up libreadline5:arm64 (5.2+dfsg-3+b13) ...
	Setting up libavahi-common3:arm64 (0.7-4+deb10u1) ...
	Setting up libldap-2.4-2:arm64 (2.4.47+dfsg-3+deb10u6) ...
	Setting up libnl-route-3-200:arm64 (3.4.0-1) ...
	Setting up ca-certificates (20200601~deb10u2) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.28.1 /usr/local/share/perl/5.28.1 /usr/lib/aarch64-linux-gnu/perl5/5.28 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.28 /usr/share/perl/5.28 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Updating certificates in /etc/ssl/certs...
	137 added, 0 removed; done.
	Setting up thin-provisioning-tools (0.7.6-2.1) ...
	Setting up libgssapi-krb5-2:arm64 (1.17-3+deb10u2) ...
	Setting up libavahi-client3:arm64 (0.7-4+deb10u1) ...
	Setting up libcurl3-gnutls:arm64 (7.64.0-4+deb10u2) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.155-3) ...
	Setting up libvirt0:arm64 (5.0.0-4+deb10u1) ...
	Setting up dmsetup (2:1.02.155-3) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.155-3) ...
	Setting up liblvm2cmd2.03:arm64 (2.03.02-3) ...
	Setting up dmeventd (2:1.02.155-3) ...
	Setting up lvm2 (2.03.02-3) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Processing triggers for libc-bin (2.28-10) ...
	Processing triggers for ca-certificates (20200601~deb10u2) ...
	Updating certificates in /etc/ssl/certs...
	0 added, 0 removed; done.
	Running hooks in /etc/ca-certificates/update.d...
	done.

                                                
                                                
-- /stdout --
** stderr ** 
	WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb (--install):
	 package architecture (amd64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb

                                                
                                                
** /stderr **
pkg_install_test.go:87: failed to install "/Users/jenkins/workspace/out/docker-machine-driver-kvm2_1.22.0-0_amd64.deb" on "debian:latest": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (66.68s)

                                                
                                    
x
+
TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (66.31s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/Users/jenkins/workspace/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/Users/jenkins/workspace/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": exit status 1 (1m6.312610753s)

                                                
                                                
-- stdout --
	Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
	Get:2 http://deb.debian.org/debian buster InRelease [122 kB]
	Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
	Get:4 http://security.debian.org/debian-security buster/updates/main arm64 Packages [295 kB]
	Get:5 http://deb.debian.org/debian buster/main arm64 Packages [7735 kB]
	Get:6 http://deb.debian.org/debian buster-updates/main arm64 Packages [14.5 kB]
	Fetched 8284 kB in 6s (1424 kB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libavahi-client3 libavahi-common-data libavahi-common3 libcurl3-gnutls
	  libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1
	  libgssapi-krb5-2 libicu63 libk5crypto3 libkeyutils1 libkrb5-3
	  libkrb5support0 libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14
	  libnl-3-200 libnl-route-3-200 libnuma1 libpsl5 libreadline5 librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libssl1.1 libxml2
	  libyajl2 lsb-base lvm2 openssl publicsuffix readline-common
	  thin-provisioning-tools
	Suggested packages:
	  default-dbus-session-bus | dbus-session-bus krb5-doc krb5-user
	  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
	  libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql readline-doc
	The following NEW packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libavahi-client3 libavahi-common-data libavahi-common3 libcurl3-gnutls
	  libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1
	  libgssapi-krb5-2 libicu63 libk5crypto3 libkeyutils1 libkrb5-3
	  libkrb5support0 libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14
	  libnl-3-200 libnl-route-3-200 libnuma1 libpsl5 libreadline5 librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libssl1.1 libvirt0
	  libxml2 libyajl2 lsb-base lvm2 openssl publicsuffix readline-common
	  thin-provisioning-tools
	0 upgraded, 45 newly installed, 0 to remove and 2 not upgraded.
	Need to get 21.7 MB of archives.
	After this operation, 66.7 MB of additional disk space will be used.
	Get:1 http://deb.debian.org/debian buster/main arm64 readline-common all 7.0-5 [70.6 kB]
	Get:2 http://security.debian.org/debian-security buster/updates/main arm64 krb5-locales all 1.17-3+deb10u2 [95.5 kB]
	Get:3 http://deb.debian.org/debian buster/main arm64 libapparmor1 arm64 2.13.2-10 [93.8 kB]
	Get:4 http://deb.debian.org/debian buster/main arm64 libdbus-1-3 arm64 1.12.20-0+deb10u1 [206 kB]
	Get:5 http://security.debian.org/debian-security buster/updates/main arm64 libkrb5support0 arm64 1.17-3+deb10u2 [64.9 kB]
	Get:6 http://security.debian.org/debian-security buster/updates/main arm64 libk5crypto3 arm64 1.17-3+deb10u2 [123 kB]
	Get:7 http://deb.debian.org/debian buster/main arm64 libexpat1 arm64 2.2.6-2+deb10u1 [85.4 kB]
	Get:8 http://security.debian.org/debian-security buster/updates/main arm64 libkrb5-3 arm64 1.17-3+deb10u2 [351 kB]
	Get:9 http://deb.debian.org/debian buster/main arm64 dbus arm64 1.12.20-0+deb10u1 [227 kB]
	Get:10 http://deb.debian.org/debian buster/main arm64 libkeyutils1 arm64 1.6-6 [14.9 kB]
	Get:11 http://deb.debian.org/debian buster/main arm64 libssl1.1 arm64 1.1.1d-0+deb10u6 [1382 kB]
	Get:12 http://security.debian.org/debian-security buster/updates/main arm64 libgssapi-krb5-2 arm64 1.17-3+deb10u2 [150 kB]
	Get:13 http://deb.debian.org/debian buster/main arm64 openssl arm64 1.1.1d-0+deb10u6 [823 kB]
	Get:14 http://deb.debian.org/debian buster/main arm64 ca-certificates all 20200601~deb10u2 [166 kB]
	Get:15 http://deb.debian.org/debian buster/main arm64 dmsetup arm64 2:1.02.155-3 [83.9 kB]
	Get:16 http://deb.debian.org/debian buster/main arm64 libdevmapper1.02.1 arm64 2:1.02.155-3 [124 kB]
	Get:17 http://deb.debian.org/debian buster/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.155-3 [21.7 kB]
	Get:18 http://deb.debian.org/debian buster/main arm64 libaio1 arm64 0.3.112-3 [11.1 kB]
	Get:19 http://deb.debian.org/debian buster/main arm64 liblvm2cmd2.03 arm64 2.03.02-3 [550 kB]
	Get:20 http://deb.debian.org/debian buster/main arm64 dmeventd arm64 2:1.02.155-3 [63.9 kB]
	Get:21 http://deb.debian.org/debian buster/main arm64 libavahi-common-data arm64 0.7-4+deb10u1 [122 kB]
	Get:22 http://deb.debian.org/debian buster/main arm64 libavahi-common3 arm64 0.7-4+deb10u1 [53.4 kB]
	Get:23 http://deb.debian.org/debian buster/main arm64 libavahi-client3 arm64 0.7-4+deb10u1 [56.9 kB]
	Get:24 http://deb.debian.org/debian buster/main arm64 libsasl2-modules-db arm64 2.1.27+dfsg-1+deb10u1 [69.3 kB]
	Get:25 http://deb.debian.org/debian buster/main arm64 libsasl2-2 arm64 2.1.27+dfsg-1+deb10u1 [105 kB]
	Get:26 http://deb.debian.org/debian buster/main arm64 libldap-common all 2.4.47+dfsg-3+deb10u6 [90.0 kB]
	Get:27 http://deb.debian.org/debian buster/main arm64 libldap-2.4-2 arm64 2.4.47+dfsg-3+deb10u6 [216 kB]
	Get:28 http://deb.debian.org/debian buster/main arm64 libnghttp2-14 arm64 1.36.0-2+deb10u1 [81.9 kB]
	Get:29 http://deb.debian.org/debian buster/main arm64 libpsl5 arm64 0.20.2-2 [53.6 kB]
	Get:30 http://deb.debian.org/debian buster/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-2 [55.7 kB]
	Get:31 http://deb.debian.org/debian buster/main arm64 libssh2-1 arm64 1.8.0-2.1 [135 kB]
	Get:32 http://deb.debian.org/debian buster/main arm64 libcurl3-gnutls arm64 7.64.0-4+deb10u2 [311 kB]
	Get:33 http://deb.debian.org/debian buster/main arm64 libicu63 arm64 63.1-6+deb10u1 [8151 kB]
	Get:34 http://deb.debian.org/debian buster/main arm64 libnl-3-200 arm64 3.4.0-1 [54.9 kB]
	Get:35 http://deb.debian.org/debian buster/main arm64 libnl-route-3-200 arm64 3.4.0-1 [134 kB]
	Get:36 http://deb.debian.org/debian buster/main arm64 libnuma1 arm64 2.0.12-1 [25.6 kB]
	Get:37 http://deb.debian.org/debian buster/main arm64 libreadline5 arm64 5.2+dfsg-3+b13 [113 kB]
	Get:38 http://deb.debian.org/debian buster/main arm64 libsasl2-modules arm64 2.1.27+dfsg-1+deb10u1 [102 kB]
	Get:39 http://deb.debian.org/debian buster/main arm64 libxml2 arm64 2.9.4+dfsg1-7+deb10u2 [625 kB]
	Get:40 http://deb.debian.org/debian buster/main arm64 libyajl2 arm64 2.1.0-3 [22.9 kB]
	Get:41 http://deb.debian.org/debian buster/main arm64 libvirt0 arm64 5.0.0-4+deb10u1 [4939 kB]
	Get:42 http://deb.debian.org/debian buster/main arm64 lsb-base all 10.2019051400 [28.4 kB]
	Get:43 http://deb.debian.org/debian buster/main arm64 lvm2 arm64 2.03.02-3 [1011 kB]
	Get:44 http://deb.debian.org/debian buster/main arm64 publicsuffix all 20190415.1030-1 [116 kB]
	Get:45 http://deb.debian.org/debian buster/main arm64 thin-provisioning-tools arm64 0.7.6-2.1 [318 kB]
	Fetched 21.7 MB in 1s (40.2 MB/s)
	Selecting previously unselected package readline-common.
	(Reading database ... 6670 files and directories currently installed.)
	Preparing to unpack .../00-readline-common_7.0-5_all.deb ...
	Unpacking readline-common (7.0-5) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../01-libapparmor1_2.13.2-10_arm64.deb ...
	Unpacking libapparmor1:arm64 (2.13.2-10) ...
	Selecting previously unselected package libdbus-1-3:arm64.
	Preparing to unpack .../02-libdbus-1-3_1.12.20-0+deb10u1_arm64.deb ...
	Unpacking libdbus-1-3:arm64 (1.12.20-0+deb10u1) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../03-libexpat1_2.2.6-2+deb10u1_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.6-2+deb10u1) ...
	Selecting previously unselected package dbus.
	Preparing to unpack .../04-dbus_1.12.20-0+deb10u1_arm64.deb ...
	Unpacking dbus (1.12.20-0+deb10u1) ...
	Selecting previously unselected package krb5-locales.
	Preparing to unpack .../05-krb5-locales_1.17-3+deb10u2_all.deb ...
	Unpacking krb5-locales (1.17-3+deb10u2) ...
	Selecting previously unselected package libkeyutils1:arm64.
	Preparing to unpack .../06-libkeyutils1_1.6-6_arm64.deb ...
	Unpacking libkeyutils1:arm64 (1.6-6) ...
	Selecting previously unselected package libkrb5support0:arm64.
	Preparing to unpack .../07-libkrb5support0_1.17-3+deb10u2_arm64.deb ...
	Unpacking libkrb5support0:arm64 (1.17-3+deb10u2) ...
	Selecting previously unselected package libk5crypto3:arm64.
	Preparing to unpack .../08-libk5crypto3_1.17-3+deb10u2_arm64.deb ...
	Unpacking libk5crypto3:arm64 (1.17-3+deb10u2) ...
	Selecting previously unselected package libssl1.1:arm64.
	Preparing to unpack .../09-libssl1.1_1.1.1d-0+deb10u6_arm64.deb ...
	Unpacking libssl1.1:arm64 (1.1.1d-0+deb10u6) ...
	Selecting previously unselected package libkrb5-3:arm64.
	Preparing to unpack .../10-libkrb5-3_1.17-3+deb10u2_arm64.deb ...
	Unpacking libkrb5-3:arm64 (1.17-3+deb10u2) ...
	Selecting previously unselected package libgssapi-krb5-2:arm64.
	Preparing to unpack .../11-libgssapi-krb5-2_1.17-3+deb10u2_arm64.deb ...
	Unpacking libgssapi-krb5-2:arm64 (1.17-3+deb10u2) ...
	Selecting previously unselected package openssl.
	Preparing to unpack .../12-openssl_1.1.1d-0+deb10u6_arm64.deb ...
	Unpacking openssl (1.1.1d-0+deb10u6) ...
	Selecting previously unselected package ca-certificates.
	Preparing to unpack .../13-ca-certificates_20200601~deb10u2_all.deb ...
	Unpacking ca-certificates (20200601~deb10u2) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../14-dmsetup_2%3a1.02.155-3_arm64.deb ...
	Unpacking dmsetup (2:1.02.155-3) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../15-libdevmapper1.02.1_2%3a1.02.155-3_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.155-3) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../16-libdevmapper-event1.02.1_2%3a1.02.155-3_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.155-3) ...
	Selecting previously unselected package libaio1:arm64.
	Preparing to unpack .../17-libaio1_0.3.112-3_arm64.deb ...
	Unpacking libaio1:arm64 (0.3.112-3) ...
	Selecting previously unselected package liblvm2cmd2.03:arm64.
	Preparing to unpack .../18-liblvm2cmd2.03_2.03.02-3_arm64.deb ...
	Unpacking liblvm2cmd2.03:arm64 (2.03.02-3) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../19-dmeventd_2%3a1.02.155-3_arm64.deb ...
	Unpacking dmeventd (2:1.02.155-3) ...
	Selecting previously unselected package libavahi-common-data:arm64.
	Preparing to unpack .../20-libavahi-common-data_0.7-4+deb10u1_arm64.deb ...
	Unpacking libavahi-common-data:arm64 (0.7-4+deb10u1) ...
	Selecting previously unselected package libavahi-common3:arm64.
	Preparing to unpack .../21-libavahi-common3_0.7-4+deb10u1_arm64.deb ...
	Unpacking libavahi-common3:arm64 (0.7-4+deb10u1) ...
	Selecting previously unselected package libavahi-client3:arm64.
	Preparing to unpack .../22-libavahi-client3_0.7-4+deb10u1_arm64.deb ...
	Unpacking libavahi-client3:arm64 (0.7-4+deb10u1) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../23-libsasl2-modules-db_2.1.27+dfsg-1+deb10u1_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../24-libsasl2-2_2.1.27+dfsg-1+deb10u1_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Selecting previously unselected package libldap-common.
	Preparing to unpack .../25-libldap-common_2.4.47+dfsg-3+deb10u6_all.deb ...
	Unpacking libldap-common (2.4.47+dfsg-3+deb10u6) ...
	Selecting previously unselected package libldap-2.4-2:arm64.
	Preparing to unpack .../26-libldap-2.4-2_2.4.47+dfsg-3+deb10u6_arm64.deb ...
	Unpacking libldap-2.4-2:arm64 (2.4.47+dfsg-3+deb10u6) ...
	Selecting previously unselected package libnghttp2-14:arm64.
	Preparing to unpack .../27-libnghttp2-14_1.36.0-2+deb10u1_arm64.deb ...
	Unpacking libnghttp2-14:arm64 (1.36.0-2+deb10u1) ...
	Selecting previously unselected package libpsl5:arm64.
	Preparing to unpack .../28-libpsl5_0.20.2-2_arm64.deb ...
	Unpacking libpsl5:arm64 (0.20.2-2) ...
	Selecting previously unselected package librtmp1:arm64.
	Preparing to unpack .../29-librtmp1_2.4+20151223.gitfa8646d.1-2_arm64.deb ...
	Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2) ...
	Selecting previously unselected package libssh2-1:arm64.
	Preparing to unpack .../30-libssh2-1_1.8.0-2.1_arm64.deb ...
	Unpacking libssh2-1:arm64 (1.8.0-2.1) ...
	Selecting previously unselected package libcurl3-gnutls:arm64.
	Preparing to unpack .../31-libcurl3-gnutls_7.64.0-4+deb10u2_arm64.deb ...
	Unpacking libcurl3-gnutls:arm64 (7.64.0-4+deb10u2) ...
	Selecting previously unselected package libicu63:arm64.
	Preparing to unpack .../32-libicu63_63.1-6+deb10u1_arm64.deb ...
	Unpacking libicu63:arm64 (63.1-6+deb10u1) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../33-libnl-3-200_3.4.0-1_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.4.0-1) ...
	Selecting previously unselected package libnl-route-3-200:arm64.
	Preparing to unpack .../34-libnl-route-3-200_3.4.0-1_arm64.deb ...
	Unpacking libnl-route-3-200:arm64 (3.4.0-1) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../35-libnuma1_2.0.12-1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.12-1) ...
	Selecting previously unselected package libreadline5:arm64.
	Preparing to unpack .../36-libreadline5_5.2+dfsg-3+b13_arm64.deb ...
	Unpacking libreadline5:arm64 (5.2+dfsg-3+b13) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../37-libsasl2-modules_2.1.27+dfsg-1+deb10u1_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../38-libxml2_2.9.4+dfsg1-7+deb10u2_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.4+dfsg1-7+deb10u2) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../39-libyajl2_2.1.0-3_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-3) ...
	Selecting previously unselected package libvirt0:arm64.
	Preparing to unpack .../40-libvirt0_5.0.0-4+deb10u1_arm64.deb ...
	Unpacking libvirt0:arm64 (5.0.0-4+deb10u1) ...
	Selecting previously unselected package lsb-base.
	Preparing to unpack .../41-lsb-base_10.2019051400_all.deb ...
	Unpacking lsb-base (10.2019051400) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../42-lvm2_2.03.02-3_arm64.deb ...
	Unpacking lvm2 (2.03.02-3) ...
	Selecting previously unselected package publicsuffix.
	Preparing to unpack .../43-publicsuffix_20190415.1030-1_all.deb ...
	Unpacking publicsuffix (20190415.1030-1) ...
	Selecting previously unselected package thin-provisioning-tools.
	Preparing to unpack .../44-thin-provisioning-tools_0.7.6-2.1_arm64.deb ...
	Unpacking thin-provisioning-tools (0.7.6-2.1) ...
	Setting up libexpat1:arm64 (2.2.6-2+deb10u1) ...
	Setting up lsb-base (10.2019051400) ...
	Setting up libkeyutils1:arm64 (1.6-6) ...
	Setting up libapparmor1:arm64 (2.13.2-10) ...
	Setting up libpsl5:arm64 (0.20.2-2) ...
	Setting up libssl1.1:arm64 (1.1.1d-0+deb10u6) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.28.1 /usr/local/share/perl/5.28.1 /usr/lib/aarch64-linux-gnu/perl5/5.28 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.28 /usr/share/perl/5.28 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Setting up libsasl2-modules:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Setting up libyajl2:arm64 (2.1.0-3) ...
	Setting up libnghttp2-14:arm64 (1.36.0-2+deb10u1) ...
	Setting up krb5-locales (1.17-3+deb10u2) ...
	Setting up libldap-common (2.4.47+dfsg-3+deb10u6) ...
	Setting up libicu63:arm64 (63.1-6+deb10u1) ...
	Setting up libkrb5support0:arm64 (1.17-3+deb10u2) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2) ...
	Setting up libavahi-common-data:arm64 (0.7-4+deb10u1) ...
	Setting up libdbus-1-3:arm64 (1.12.20-0+deb10u1) ...
	Setting up dbus (1.12.20-0+deb10u1) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Setting up libk5crypto3:arm64 (1.17-3+deb10u2) ...
	Setting up libsasl2-2:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Setting up libnuma1:arm64 (2.0.12-1) ...
	Setting up libnl-3-200:arm64 (3.4.0-1) ...
	Setting up libssh2-1:arm64 (1.8.0-2.1) ...
	Setting up libkrb5-3:arm64 (1.17-3+deb10u2) ...
	Setting up libaio1:arm64 (0.3.112-3) ...
	Setting up openssl (1.1.1d-0+deb10u6) ...
	Setting up readline-common (7.0-5) ...
	Setting up publicsuffix (20190415.1030-1) ...
	Setting up libxml2:arm64 (2.9.4+dfsg1-7+deb10u2) ...
	Setting up libreadline5:arm64 (5.2+dfsg-3+b13) ...
	Setting up libavahi-common3:arm64 (0.7-4+deb10u1) ...
	Setting up libldap-2.4-2:arm64 (2.4.47+dfsg-3+deb10u6) ...
	Setting up libnl-route-3-200:arm64 (3.4.0-1) ...
	Setting up ca-certificates (20200601~deb10u2) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.28.1 /usr/local/share/perl/5.28.1 /usr/lib/aarch64-linux-gnu/perl5/5.28 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.28 /usr/share/perl/5.28 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Updating certificates in /etc/ssl/certs...
	137 added, 0 removed; done.
	Setting up thin-provisioning-tools (0.7.6-2.1) ...
	Setting up libgssapi-krb5-2:arm64 (1.17-3+deb10u2) ...
	Setting up libavahi-client3:arm64 (0.7-4+deb10u1) ...
	Setting up libcurl3-gnutls:arm64 (7.64.0-4+deb10u2) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.155-3) ...
	Setting up libvirt0:arm64 (5.0.0-4+deb10u1) ...
	Setting up dmsetup (2:1.02.155-3) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.155-3) ...
	Setting up liblvm2cmd2.03:arm64 (2.03.02-3) ...
	Setting up dmeventd (2:1.02.155-3) ...
	Setting up lvm2 (2.03.02-3) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Processing triggers for libc-bin (2.28-10) ...
	Processing triggers for ca-certificates (20200601~deb10u2) ...
	Updating certificates in /etc/ssl/certs...
	0 added, 0 removed; done.
	Running hooks in /etc/ca-certificates/update.d...
	done.

                                                
                                                
-- /stdout --
** stderr ** 
	WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb (--install):
	 package architecture (amd64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb

                                                
                                                
** /stderr **
pkg_install_test.go:87: failed to install "/Users/jenkins/workspace/out/docker-machine-driver-kvm2_1.22.0-0_amd64.deb" on "debian:10": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (66.31s)
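The same mismatch can be confirmed without attempting the install by comparing the package's declared architecture with the container's; a sketch, reusing the mount and image from the test above (dpkg and dpkg-deb ship in the Debian base images):

	# print the container's architecture, then the architecture declared in the .deb (sketch)
	docker run --rm -v/Users/jenkins/workspace/out:/var/tmp debian:10 \
	  sh -c "dpkg --print-architecture; dpkg-deb --field /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb Architecture"

Here the first value comes back arm64 (the image variant actually pulled) while the second is amd64 (the package), which is exactly the combination dpkg rejects in the stderr output above.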

                                                
                                    
x
+
TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (39.33s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/Users/jenkins/workspace/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/Users/jenkins/workspace/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": exit status 1 (39.324844226s)

                                                
                                                
-- stdout --
	Ign:1 http://deb.debian.org/debian stretch InRelease
	Get:2 http://security.debian.org/debian-security stretch/updates InRelease [53.0 kB]
	Get:3 http://deb.debian.org/debian stretch-updates InRelease [93.6 kB]
	Get:4 http://deb.debian.org/debian stretch Release [118 kB]
	Get:5 http://deb.debian.org/debian stretch Release.gpg [2410 B]
	Get:6 http://security.debian.org/debian-security stretch/updates/main arm64 Packages [686 kB]
	Get:7 http://deb.debian.org/debian stretch/main arm64 Packages [6921 kB]
	Fetched 7874 kB in 6s (1257 kB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  dbus dmeventd dmsetup libapparmor1 libavahi-client3 libavahi-common-data
	  libavahi-common3 libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1
	  libexpat1 libfdt1 libffi6 libgmp10 libgnutls30 libhogweed4 libicu57
	  liblvm2app2.2 liblvm2cmd2.02 libnl-3-200 libnl-route-3-200 libnuma1
	  libp11-kit0 libreadline5 libsasl2-2 libsasl2-modules libsasl2-modules-db
	  libssh2-1 libssl1.1 libtasn1-6 libxen-4.8 libxenstore3.0 libxml2 libyajl2
	  lvm2 readline-common sgml-base xml-core
	Suggested packages:
	  default-dbus-session-bus | dbus-session-bus gnutls-bin
	  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
	  libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql
	  thin-provisioning-tools readline-doc sgml-base-doc debhelper
	The following NEW packages will be installed:
	  dbus dmeventd dmsetup libapparmor1 libavahi-client3 libavahi-common-data
	  libavahi-common3 libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1
	  libexpat1 libfdt1 libffi6 libgmp10 libgnutls30 libhogweed4 libicu57
	  liblvm2app2.2 liblvm2cmd2.02 libnl-3-200 libnl-route-3-200 libnuma1
	  libp11-kit0 libreadline5 libsasl2-2 libsasl2-modules libsasl2-modules-db
	  libssh2-1 libssl1.1 libtasn1-6 libvirt0 libxen-4.8 libxenstore3.0 libxml2
	  libyajl2 lvm2 readline-common sgml-base xml-core
	0 upgraded, 39 newly installed, 0 to remove and 3 not upgraded.
	Need to get 18.7 MB of archives.
	After this operation, 57.6 MB of additional disk space will be used.
	Get:1 http://deb.debian.org/debian stretch/main arm64 sgml-base all 1.29 [14.8 kB]
	Get:2 http://security.debian.org/debian-security stretch/updates/main arm64 libssl1.1 arm64 1.1.0l-1~deb9u3 [1125 kB]
	Get:3 http://deb.debian.org/debian stretch/main arm64 readline-common all 7.0-3 [70.4 kB]
	Get:4 http://deb.debian.org/debian stretch/main arm64 libapparmor1 arm64 2.11.0-3+deb9u2 [75.7 kB]
	Get:5 http://deb.debian.org/debian stretch/main arm64 libdbus-1-3 arm64 1.10.32-0+deb9u1 [172 kB]
	Get:6 http://deb.debian.org/debian stretch/main arm64 libexpat1 arm64 2.2.0-2+deb9u3 [70.9 kB]
	Get:7 http://deb.debian.org/debian stretch/main arm64 dbus arm64 1.10.32-0+deb9u1 [194 kB]
	Get:8 http://deb.debian.org/debian stretch/main arm64 libgmp10 arm64 2:6.1.2+dfsg-1 [213 kB]
	Get:9 http://security.debian.org/debian-security stretch/updates/main arm64 libp11-kit0 arm64 0.23.3-2+deb9u1 [91.4 kB]
	Get:10 http://deb.debian.org/debian stretch/main arm64 libhogweed4 arm64 3.3-1+b2 [128 kB]
	Get:11 http://security.debian.org/debian-security stretch/updates/main arm64 libxml2 arm64 2.9.4+dfsg1-2.2+deb9u5 [790 kB]
	Get:12 http://deb.debian.org/debian stretch/main arm64 libffi6 arm64 3.2.1-6 [19.0 kB]
	Get:13 http://deb.debian.org/debian stretch/main arm64 libtasn1-6 arm64 4.10-1.1+deb9u1 [45.7 kB]
	Get:14 http://deb.debian.org/debian stretch/main arm64 libgnutls30 arm64 3.5.8-5+deb9u5 [784 kB]
	Get:15 http://security.debian.org/debian-security stretch/updates/main arm64 libvirt0 arm64 3.0.0-4+deb9u5 [3913 kB]
	Get:16 http://deb.debian.org/debian stretch/main arm64 libsasl2-modules-db arm64 2.1.27~101-g0780600+dfsg-3+deb9u1 [66.8 kB]
	Get:17 http://deb.debian.org/debian stretch/main arm64 libsasl2-2 arm64 2.1.27~101-g0780600+dfsg-3+deb9u1 [97.8 kB]
	Get:18 http://deb.debian.org/debian stretch/main arm64 libicu57 arm64 57.1-6+deb9u4 [7553 kB]
	Get:19 http://deb.debian.org/debian stretch/main arm64 dmsetup arm64 2:1.02.137-2 [100 kB]
	Get:20 http://deb.debian.org/debian stretch/main arm64 libdevmapper1.02.1 arm64 2:1.02.137-2 [143 kB]
	Get:21 http://deb.debian.org/debian stretch/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.137-2 [40.1 kB]
	Get:22 http://deb.debian.org/debian stretch/main arm64 liblvm2cmd2.02 arm64 2.02.168-2 [566 kB]
	Get:23 http://deb.debian.org/debian stretch/main arm64 dmeventd arm64 2:1.02.137-2 [56.5 kB]
	Get:24 http://deb.debian.org/debian stretch/main arm64 libavahi-common-data arm64 0.6.32-2 [118 kB]
	Get:25 http://deb.debian.org/debian stretch/main arm64 libavahi-common3 arm64 0.6.32-2 [48.4 kB]
	Get:26 http://deb.debian.org/debian stretch/main arm64 libavahi-client3 arm64 0.6.32-2 [51.2 kB]
	Get:27 http://deb.debian.org/debian stretch/main arm64 liblvm2app2.2 arm64 2.02.168-2 [458 kB]
	Get:28 http://deb.debian.org/debian stretch/main arm64 libnl-3-200 arm64 3.2.27-2 [52.5 kB]
	Get:29 http://deb.debian.org/debian stretch/main arm64 libnl-route-3-200 arm64 3.2.27-2 [111 kB]
	Get:30 http://deb.debian.org/debian stretch/main arm64 libnuma1 arm64 2.0.11-2.1 [30.1 kB]
	Get:31 http://deb.debian.org/debian stretch/main arm64 libreadline5 arm64 5.2+dfsg-3+b1 [101 kB]
	Get:32 http://deb.debian.org/debian stretch/main arm64 libsasl2-modules arm64 2.1.27~101-g0780600+dfsg-3+deb9u1 [94.8 kB]
	Get:33 http://deb.debian.org/debian stretch/main arm64 libssh2-1 arm64 1.7.0-1+deb9u1 [127 kB]
	Get:34 http://deb.debian.org/debian stretch/main arm64 libfdt1 arm64 1.4.2-1 [12.8 kB]
	Get:35 http://deb.debian.org/debian stretch/main arm64 libxenstore3.0 arm64 4.8.5.final+shim4.10.4-1+deb9u12 [33.8 kB]
	Get:36 http://deb.debian.org/debian stretch/main arm64 libyajl2 arm64 2.1.0-2+b3 [20.7 kB]
	Get:37 http://deb.debian.org/debian stretch/main arm64 libxen-4.8 arm64 4.8.5.final+shim4.10.4-1+deb9u12 [298 kB]
	Get:38 http://deb.debian.org/debian stretch/main arm64 lvm2 arm64 2.02.168-2 [813 kB]
	Get:39 http://deb.debian.org/debian stretch/main arm64 xml-core all 0.17 [23.2 kB]
	Fetched 18.7 MB in 0s (23.2 MB/s)
	Selecting previously unselected package sgml-base.
	(Reading database ... 6495 files and directories currently installed.)
	Preparing to unpack .../00-sgml-base_1.29_all.deb ...
	Unpacking sgml-base (1.29) ...
	Selecting previously unselected package libssl1.1:arm64.
	Preparing to unpack .../01-libssl1.1_1.1.0l-1~deb9u3_arm64.deb ...
	Unpacking libssl1.1:arm64 (1.1.0l-1~deb9u3) ...
	Selecting previously unselected package readline-common.
	Preparing to unpack .../02-readline-common_7.0-3_all.deb ...
	Unpacking readline-common (7.0-3) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../03-libapparmor1_2.11.0-3+deb9u2_arm64.deb ...
	Unpacking libapparmor1:arm64 (2.11.0-3+deb9u2) ...
	Selecting previously unselected package libdbus-1-3:arm64.
	Preparing to unpack .../04-libdbus-1-3_1.10.32-0+deb9u1_arm64.deb ...
	Unpacking libdbus-1-3:arm64 (1.10.32-0+deb9u1) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../05-libexpat1_2.2.0-2+deb9u3_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.0-2+deb9u3) ...
	Selecting previously unselected package dbus.
	Preparing to unpack .../06-dbus_1.10.32-0+deb9u1_arm64.deb ...
	Unpacking dbus (1.10.32-0+deb9u1) ...
	Selecting previously unselected package libgmp10:arm64.
	Preparing to unpack .../07-libgmp10_2%3a6.1.2+dfsg-1_arm64.deb ...
	Unpacking libgmp10:arm64 (2:6.1.2+dfsg-1) ...
	Selecting previously unselected package libhogweed4:arm64.
	Preparing to unpack .../08-libhogweed4_3.3-1+b2_arm64.deb ...
	Unpacking libhogweed4:arm64 (3.3-1+b2) ...
	Selecting previously unselected package libffi6:arm64.
	Preparing to unpack .../09-libffi6_3.2.1-6_arm64.deb ...
	Unpacking libffi6:arm64 (3.2.1-6) ...
	Selecting previously unselected package libp11-kit0:arm64.
	Preparing to unpack .../10-libp11-kit0_0.23.3-2+deb9u1_arm64.deb ...
	Unpacking libp11-kit0:arm64 (0.23.3-2+deb9u1) ...
	Selecting previously unselected package libtasn1-6:arm64.
	Preparing to unpack .../11-libtasn1-6_4.10-1.1+deb9u1_arm64.deb ...
	Unpacking libtasn1-6:arm64 (4.10-1.1+deb9u1) ...
	Selecting previously unselected package libgnutls30:arm64.
	Preparing to unpack .../12-libgnutls30_3.5.8-5+deb9u5_arm64.deb ...
	Unpacking libgnutls30:arm64 (3.5.8-5+deb9u5) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../13-libsasl2-modules-db_2.1.27~101-g0780600+dfsg-3+deb9u1_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27~101-g0780600+dfsg-3+deb9u1) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../14-libsasl2-2_2.1.27~101-g0780600+dfsg-3+deb9u1_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27~101-g0780600+dfsg-3+deb9u1) ...
	Selecting previously unselected package libicu57:arm64.
	Preparing to unpack .../15-libicu57_57.1-6+deb9u4_arm64.deb ...
	Unpacking libicu57:arm64 (57.1-6+deb9u4) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../16-libxml2_2.9.4+dfsg1-2.2+deb9u5_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.4+dfsg1-2.2+deb9u5) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../17-dmsetup_2%3a1.02.137-2_arm64.deb ...
	Unpacking dmsetup (2:1.02.137-2) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../18-libdevmapper1.02.1_2%3a1.02.137-2_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.137-2) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../19-libdevmapper-event1.02.1_2%3a1.02.137-2_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.137-2) ...
	Selecting previously unselected package liblvm2cmd2.02:arm64.
	Preparing to unpack .../20-liblvm2cmd2.02_2.02.168-2_arm64.deb ...
	Unpacking liblvm2cmd2.02:arm64 (2.02.168-2) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../21-dmeventd_2%3a1.02.137-2_arm64.deb ...
	Unpacking dmeventd (2:1.02.137-2) ...
	Selecting previously unselected package libavahi-common-data:arm64.
	Preparing to unpack .../22-libavahi-common-data_0.6.32-2_arm64.deb ...
	Unpacking libavahi-common-data:arm64 (0.6.32-2) ...
	Selecting previously unselected package libavahi-common3:arm64.
	Preparing to unpack .../23-libavahi-common3_0.6.32-2_arm64.deb ...
	Unpacking libavahi-common3:arm64 (0.6.32-2) ...
	Selecting previously unselected package libavahi-client3:arm64.
	Preparing to unpack .../24-libavahi-client3_0.6.32-2_arm64.deb ...
	Unpacking libavahi-client3:arm64 (0.6.32-2) ...
	Selecting previously unselected package liblvm2app2.2:arm64.
	Preparing to unpack .../25-liblvm2app2.2_2.02.168-2_arm64.deb ...
	Unpacking liblvm2app2.2:arm64 (2.02.168-2) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../26-libnl-3-200_3.2.27-2_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.2.27-2) ...
	Selecting previously unselected package libnl-route-3-200:arm64.
	Preparing to unpack .../27-libnl-route-3-200_3.2.27-2_arm64.deb ...
	Unpacking libnl-route-3-200:arm64 (3.2.27-2) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../28-libnuma1_2.0.11-2.1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.11-2.1) ...
	Selecting previously unselected package libreadline5:arm64.
	Preparing to unpack .../29-libreadline5_5.2+dfsg-3+b1_arm64.deb ...
	Unpacking libreadline5:arm64 (5.2+dfsg-3+b1) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../30-libsasl2-modules_2.1.27~101-g0780600+dfsg-3+deb9u1_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27~101-g0780600+dfsg-3+deb9u1) ...
	Selecting previously unselected package libssh2-1:arm64.
	Preparing to unpack .../31-libssh2-1_1.7.0-1+deb9u1_arm64.deb ...
	Unpacking libssh2-1:arm64 (1.7.0-1+deb9u1) ...
	Selecting previously unselected package libfdt1:arm64.
	Preparing to unpack .../32-libfdt1_1.4.2-1_arm64.deb ...
	Unpacking libfdt1:arm64 (1.4.2-1) ...
	Selecting previously unselected package libxenstore3.0:arm64.
	Preparing to unpack .../33-libxenstore3.0_4.8.5.final+shim4.10.4-1+deb9u12_arm64.deb ...
	Unpacking libxenstore3.0:arm64 (4.8.5.final+shim4.10.4-1+deb9u12) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../34-libyajl2_2.1.0-2+b3_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-2+b3) ...
	Selecting previously unselected package libxen-4.8:arm64.
	Preparing to unpack .../35-libxen-4.8_4.8.5.final+shim4.10.4-1+deb9u12_arm64.deb ...
	Unpacking libxen-4.8:arm64 (4.8.5.final+shim4.10.4-1+deb9u12) ...
	Selecting previously unselected package libvirt0.
	Preparing to unpack .../36-libvirt0_3.0.0-4+deb9u5_arm64.deb ...
	Unpacking libvirt0 (3.0.0-4+deb9u5) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../37-lvm2_2.02.168-2_arm64.deb ...
	Unpacking lvm2 (2.02.168-2) ...
	Selecting previously unselected package xml-core.
	Preparing to unpack .../38-xml-core_0.17_all.deb ...
	Unpacking xml-core (0.17) ...
	Setting up readline-common (7.0-3) ...
	Setting up libexpat1:arm64 (2.2.0-2+deb9u3) ...
	Setting up libnuma1:arm64 (2.0.11-2.1) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27~101-g0780600+dfsg-3+deb9u1) ...
	Setting up libsasl2-2:arm64 (2.1.27~101-g0780600+dfsg-3+deb9u1) ...
	Setting up libxenstore3.0:arm64 (4.8.5.final+shim4.10.4-1+deb9u12) ...
	Setting up sgml-base (1.29) ...
	Setting up libicu57:arm64 (57.1-6+deb9u4) ...
	Setting up libxml2:arm64 (2.9.4+dfsg1-2.2+deb9u5) ...
	Setting up libtasn1-6:arm64 (4.10-1.1+deb9u1) ...
	Setting up libyajl2:arm64 (2.1.0-2+b3) ...
	Setting up libgmp10:arm64 (2:6.1.2+dfsg-1) ...
	Setting up libssh2-1:arm64 (1.7.0-1+deb9u1) ...
	Processing triggers for libc-bin (2.24-11+deb9u4) ...
	Setting up libapparmor1:arm64 (2.11.0-3+deb9u2) ...
	Setting up libssl1.1:arm64 (1.1.0l-1~deb9u3) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.24.1 /usr/local/share/perl/5.24.1 /usr/lib/aarch64-linux-gnu/perl5/5.24 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.24 /usr/share/perl/5.24 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base .) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Setting up libffi6:arm64 (3.2.1-6) ...
	Setting up xml-core (0.17) ...
	Setting up libreadline5:arm64 (5.2+dfsg-3+b1) ...
	Setting up libsasl2-modules:arm64 (2.1.27~101-g0780600+dfsg-3+deb9u1) ...
	Setting up libnl-3-200:arm64 (3.2.27-2) ...
	Setting up libdbus-1-3:arm64 (1.10.32-0+deb9u1) ...
	Setting up libavahi-common-data:arm64 (0.6.32-2) ...
	Setting up libfdt1:arm64 (1.4.2-1) ...
	Setting up libnl-route-3-200:arm64 (3.2.27-2) ...
	Setting up libxen-4.8:arm64 (4.8.5.final+shim4.10.4-1+deb9u12) ...
	Setting up libhogweed4:arm64 (3.3-1+b2) ...
	Setting up libp11-kit0:arm64 (0.23.3-2+deb9u1) ...
	Setting up libavahi-common3:arm64 (0.6.32-2) ...
	Setting up dbus (1.10.32-0+deb9u1) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Setting up libgnutls30:arm64 (3.5.8-5+deb9u5) ...
	Setting up libavahi-client3:arm64 (0.6.32-2) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.137-2) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.137-2) ...
	Setting up liblvm2cmd2.02:arm64 (2.02.168-2) ...
	Setting up dmsetup (2:1.02.137-2) ...
	Setting up liblvm2app2.2:arm64 (2.02.168-2) ...
	Setting up dmeventd (2:1.02.137-2) ...
	Setting up lvm2 (2.02.168-2) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Setting up libvirt0 (3.0.0-4+deb9u5) ...
	Processing triggers for libc-bin (2.24-11+deb9u4) ...
	Processing triggers for sgml-base (1.29) ...

                                                
                                                
-- /stdout --
** stderr ** 
	WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb (--install):
	 package architecture (amd64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb

                                                
                                                
** /stderr **
pkg_install_test.go:87: failed to install "/Users/jenkins/workspace/out/docker-machine-driver-kvm2_1.22.0-0_amd64.deb" on "debian:9": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (39.33s)
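The stderr above shows the failure mode shared by every kvm2-driver entry in this run: the base image is resolved to linux/arm64 while the .deb under test is built for amd64, so dpkg aborts with an architecture mismatch. A minimal check of that diagnosis (a sketch only; it reuses the same image and volume mount as the failing test command) is to compare the container's native dpkg architecture with the Architecture field recorded inside the package:

	# Sketch: print the container's dpkg architecture, then the Architecture
	# field declared inside the .deb that failed to install.
	docker run --rm -v/Users/jenkins/workspace/out:/var/tmp debian:9 sh -c \
	  "dpkg --print-architecture; dpkg-deb --field /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb Architecture"

Under the conditions logged here this would be expected to print arm64 followed by amd64, matching the dpkg error text.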

                                                
                                    
x
+
TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (73.05s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/Users/jenkins/workspace/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
E0812 17:37:13.866081   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/Users/jenkins/workspace/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": exit status 1 (1m13.046890549s)

                                                
                                                
-- stdout --
	Get:1 http://ports.ubuntu.com/ubuntu-ports focal InRelease [265 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease [114 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease [101 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease [114 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports focal/multiverse arm64 Packages [139 kB]
	Get:6 http://ports.ubuntu.com/ubuntu-ports focal/restricted arm64 Packages [1317 B]
	Get:7 http://ports.ubuntu.com/ubuntu-ports focal/universe arm64 Packages [11.1 MB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 Packages [1234 kB]
	Get:9 http://ports.ubuntu.com/ubuntu-ports focal-updates/universe arm64 Packages [994 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports focal-updates/restricted arm64 Packages [3110 B]
	Get:11 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 Packages [1077 kB]
	Get:12 http://ports.ubuntu.com/ubuntu-ports focal-updates/multiverse arm64 Packages [8711 B]
	Get:13 http://ports.ubuntu.com/ubuntu-ports focal-backports/main arm64 Packages [2680 B]
	Get:14 http://ports.ubuntu.com/ubuntu-ports focal-backports/universe arm64 Packages [6320 B]
	Get:15 http://ports.ubuntu.com/ubuntu-ports focal-security/universe arm64 Packages [725 kB]
	Get:16 http://ports.ubuntu.com/ubuntu-ports focal-security/restricted arm64 Packages [2866 B]
	Get:17 http://ports.ubuntu.com/ubuntu-ports focal-security/multiverse arm64 Packages [3243 B]
	Get:18 http://ports.ubuntu.com/ubuntu-ports focal-security/main arm64 Packages [669 kB]
	Fetched 16.6 MB in 6s (2584 kB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libasn1-8-heimdal libbrotli1 libcurl3-gnutls libdbus-1-3
	  libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1 libglib2.0-0
	  libglib2.0-data libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal
	  libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libicu66
	  libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0
	  libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14 libnl-3-200
	  libnuma1 libpsl5 libreadline5 libroken18-heimdal librtmp1 libsasl2-2
	  libsasl2-modules libsasl2-modules-db libsqlite3-0 libssh-4 libssl1.1
	  libwind0-heimdal libxml2 libyajl2 lvm2 openssl publicsuffix readline-common
	  shared-mime-info thin-provisioning-tools tzdata xdg-user-dirs
	Suggested packages:
	  default-dbus-session-bus | dbus-session-bus krb5-doc krb5-user
	  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
	  libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql readline-doc
	The following NEW packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libasn1-8-heimdal libbrotli1 libcurl3-gnutls libdbus-1-3
	  libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1 libglib2.0-0
	  libglib2.0-data libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal
	  libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libicu66
	  libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0
	  libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14 libnl-3-200
	  libnuma1 libpsl5 libreadline5 libroken18-heimdal librtmp1 libsasl2-2
	  libsasl2-modules libsasl2-modules-db libsqlite3-0 libssh-4 libssl1.1
	  libvirt0 libwind0-heimdal libxml2 libyajl2 lvm2 openssl publicsuffix
	  readline-common shared-mime-info thin-provisioning-tools tzdata
	  xdg-user-dirs
	0 upgraded, 56 newly installed, 0 to remove and 14 not upgraded.
	Need to get 19.8 MB of archives.
	After this operation, 79.4 MB of additional disk space will be used.
	Get:1 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libssl1.1 arm64 1.1.1f-1ubuntu2.5 [1155 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 openssl arm64 1.1.1f-1ubuntu2.5 [599 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 ca-certificates all 20210119~20.04.1 [146 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libapparmor1 arm64 2.13.3-7ubuntu5.1 [32.9 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libdbus-1-3 arm64 1.12.16-2ubuntu2.1 [170 kB]
	Get:6 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libexpat1 arm64 2.2.9-1build1 [61.3 kB]
	Get:7 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 dbus arm64 1.12.16-2ubuntu2.1 [141 kB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libdevmapper1.02.1 arm64 2:1.02.167-1ubuntu1 [110 kB]
	Get:9 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 dmsetup arm64 2:1.02.167-1ubuntu1 [68.5 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libglib2.0-0 arm64 2.64.6-1~ubuntu20.04.4 [1200 kB]
	Get:11 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libglib2.0-data all 2.64.6-1~ubuntu20.04.4 [6052 B]
	Get:12 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 tzdata all 2021a-0ubuntu0.20.04 [295 kB]
	Get:13 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libicu66 arm64 66.1-2ubuntu2 [8357 kB]
	Get:14 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libsqlite3-0 arm64 3.31.1-4ubuntu0.2 [507 kB]
	Get:15 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libxml2 arm64 2.9.10+dfsg-5ubuntu0.20.04.1 [572 kB]
	Get:16 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 readline-common all 8.0-4 [53.5 kB]
	Get:17 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 shared-mime-info arm64 1.15-1 [429 kB]
	Get:18 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 xdg-user-dirs arm64 0.17-2ubuntu1 [47.6 kB]
	Get:19 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 krb5-locales all 1.17-6ubuntu4.1 [11.4 kB]
	Get:20 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libkrb5support0 arm64 1.17-6ubuntu4.1 [30.4 kB]
	Get:21 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libk5crypto3 arm64 1.17-6ubuntu4.1 [80.4 kB]
	Get:22 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libkeyutils1 arm64 1.6-6ubuntu1 [10.1 kB]
	Get:23 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libkrb5-3 arm64 1.17-6ubuntu4.1 [312 kB]
	Get:24 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libgssapi-krb5-2 arm64 1.17-6ubuntu4.1 [113 kB]
	Get:25 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libnuma1 arm64 2.0.12-1 [20.5 kB]
	Get:26 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libpsl5 arm64 0.21.0-1ubuntu1 [51.3 kB]
	Get:27 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 publicsuffix all 20200303.0012-1 [111 kB]
	Get:28 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.167-1ubuntu1 [10.9 kB]
	Get:29 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libaio1 arm64 0.3.112-5 [7072 B]
	Get:30 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 liblvm2cmd2.03 arm64 2.03.07-1ubuntu1 [576 kB]
	Get:31 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 dmeventd arm64 2:1.02.167-1ubuntu1 [32.0 kB]
	Get:32 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libroken18-heimdal arm64 7.7.0+dfsg-1ubuntu1 [39.4 kB]
	Get:33 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libasn1-8-heimdal arm64 7.7.0+dfsg-1ubuntu1 [150 kB]
	Get:34 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libbrotli1 arm64 1.0.7-6ubuntu0.1 [257 kB]
	Get:35 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libheimbase1-heimdal arm64 7.7.0+dfsg-1ubuntu1 [27.9 kB]
	Get:36 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libhcrypto4-heimdal arm64 7.7.0+dfsg-1ubuntu1 [86.4 kB]
	Get:37 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libwind0-heimdal arm64 7.7.0+dfsg-1ubuntu1 [47.3 kB]
	Get:38 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libhx509-5-heimdal arm64 7.7.0+dfsg-1ubuntu1 [98.7 kB]
	Get:39 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libkrb5-26-heimdal arm64 7.7.0+dfsg-1ubuntu1 [191 kB]
	Get:40 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libheimntlm0-heimdal arm64 7.7.0+dfsg-1ubuntu1 [14.7 kB]
	Get:41 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libgssapi3-heimdal arm64 7.7.0+dfsg-1ubuntu1 [88.3 kB]
	Get:42 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libsasl2-modules-db arm64 2.1.27+dfsg-2 [15.1 kB]
	Get:43 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libsasl2-2 arm64 2.1.27+dfsg-2 [48.4 kB]
	Get:44 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libldap-common all 2.4.49+dfsg-2ubuntu1.8 [16.6 kB]
	Get:45 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libldap-2.4-2 arm64 2.4.49+dfsg-2ubuntu1.8 [145 kB]
	Get:46 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libnghttp2-14 arm64 1.40.0-1build1 [74.7 kB]
	Get:47 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-2build1 [53.3 kB]
	Get:48 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libssh-4 arm64 0.9.3-2ubuntu2.1 [159 kB]
	Get:49 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libcurl3-gnutls arm64 7.68.0-1ubuntu2.6 [212 kB]
	Get:50 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libnl-3-200 arm64 3.4.0-1 [51.5 kB]
	Get:51 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libreadline5 arm64 5.2+dfsg-3build3 [94.6 kB]
	Get:52 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libsasl2-modules arm64 2.1.27+dfsg-2 [46.3 kB]
	Get:53 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libyajl2 arm64 2.1.0-3 [19.3 kB]
	Get:54 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libvirt0 arm64 6.0.0-0ubuntu8.12 [1267 kB]
	Get:55 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 lvm2 arm64 2.03.07-1ubuntu1 [951 kB]
	Get:56 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 thin-provisioning-tools arm64 0.8.5-4build1 [324 kB]
	Fetched 19.8 MB in 2s (11.0 MB/s)
	Selecting previously unselected package libssl1.1:arm64.
	(Reading database ... 4120 files and directories currently installed.)
	Preparing to unpack .../00-libssl1.1_1.1.1f-1ubuntu2.5_arm64.deb ...
	Unpacking libssl1.1:arm64 (1.1.1f-1ubuntu2.5) ...
	Selecting previously unselected package openssl.
	Preparing to unpack .../01-openssl_1.1.1f-1ubuntu2.5_arm64.deb ...
	Unpacking openssl (1.1.1f-1ubuntu2.5) ...
	Selecting previously unselected package ca-certificates.
	Preparing to unpack .../02-ca-certificates_20210119~20.04.1_all.deb ...
	Unpacking ca-certificates (20210119~20.04.1) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../03-libapparmor1_2.13.3-7ubuntu5.1_arm64.deb ...
	Unpacking libapparmor1:arm64 (2.13.3-7ubuntu5.1) ...
	Selecting previously unselected package libdbus-1-3:arm64.
	Preparing to unpack .../04-libdbus-1-3_1.12.16-2ubuntu2.1_arm64.deb ...
	Unpacking libdbus-1-3:arm64 (1.12.16-2ubuntu2.1) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../05-libexpat1_2.2.9-1build1_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.9-1build1) ...
	Selecting previously unselected package dbus.
	Preparing to unpack .../06-dbus_1.12.16-2ubuntu2.1_arm64.deb ...
	Unpacking dbus (1.12.16-2ubuntu2.1) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../07-libdevmapper1.02.1_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../08-dmsetup_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking dmsetup (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package libglib2.0-0:arm64.
	Preparing to unpack .../09-libglib2.0-0_2.64.6-1~ubuntu20.04.4_arm64.deb ...
	Unpacking libglib2.0-0:arm64 (2.64.6-1~ubuntu20.04.4) ...
	Selecting previously unselected package libglib2.0-data.
	Preparing to unpack .../10-libglib2.0-data_2.64.6-1~ubuntu20.04.4_all.deb ...
	Unpacking libglib2.0-data (2.64.6-1~ubuntu20.04.4) ...
	Selecting previously unselected package tzdata.
	Preparing to unpack .../11-tzdata_2021a-0ubuntu0.20.04_all.deb ...
	Unpacking tzdata (2021a-0ubuntu0.20.04) ...
	Selecting previously unselected package libicu66:arm64.
	Preparing to unpack .../12-libicu66_66.1-2ubuntu2_arm64.deb ...
	Unpacking libicu66:arm64 (66.1-2ubuntu2) ...
	Selecting previously unselected package libsqlite3-0:arm64.
	Preparing to unpack .../13-libsqlite3-0_3.31.1-4ubuntu0.2_arm64.deb ...
	Unpacking libsqlite3-0:arm64 (3.31.1-4ubuntu0.2) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../14-libxml2_2.9.10+dfsg-5ubuntu0.20.04.1_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.10+dfsg-5ubuntu0.20.04.1) ...
	Selecting previously unselected package readline-common.
	Preparing to unpack .../15-readline-common_8.0-4_all.deb ...
	Unpacking readline-common (8.0-4) ...
	Selecting previously unselected package shared-mime-info.
	Preparing to unpack .../16-shared-mime-info_1.15-1_arm64.deb ...
	Unpacking shared-mime-info (1.15-1) ...
	Selecting previously unselected package xdg-user-dirs.
	Preparing to unpack .../17-xdg-user-dirs_0.17-2ubuntu1_arm64.deb ...
	Unpacking xdg-user-dirs (0.17-2ubuntu1) ...
	Selecting previously unselected package krb5-locales.
	Preparing to unpack .../18-krb5-locales_1.17-6ubuntu4.1_all.deb ...
	Unpacking krb5-locales (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libkrb5support0:arm64.
	Preparing to unpack .../19-libkrb5support0_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libkrb5support0:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libk5crypto3:arm64.
	Preparing to unpack .../20-libk5crypto3_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libk5crypto3:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libkeyutils1:arm64.
	Preparing to unpack .../21-libkeyutils1_1.6-6ubuntu1_arm64.deb ...
	Unpacking libkeyutils1:arm64 (1.6-6ubuntu1) ...
	Selecting previously unselected package libkrb5-3:arm64.
	Preparing to unpack .../22-libkrb5-3_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libkrb5-3:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libgssapi-krb5-2:arm64.
	Preparing to unpack .../23-libgssapi-krb5-2_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libgssapi-krb5-2:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../24-libnuma1_2.0.12-1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.12-1) ...
	Selecting previously unselected package libpsl5:arm64.
	Preparing to unpack .../25-libpsl5_0.21.0-1ubuntu1_arm64.deb ...
	Unpacking libpsl5:arm64 (0.21.0-1ubuntu1) ...
	Selecting previously unselected package publicsuffix.
	Preparing to unpack .../26-publicsuffix_20200303.0012-1_all.deb ...
	Unpacking publicsuffix (20200303.0012-1) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../27-libdevmapper-event1.02.1_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package libaio1:arm64.
	Preparing to unpack .../28-libaio1_0.3.112-5_arm64.deb ...
	Unpacking libaio1:arm64 (0.3.112-5) ...
	Selecting previously unselected package liblvm2cmd2.03:arm64.
	Preparing to unpack .../29-liblvm2cmd2.03_2.03.07-1ubuntu1_arm64.deb ...
	Unpacking liblvm2cmd2.03:arm64 (2.03.07-1ubuntu1) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../30-dmeventd_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking dmeventd (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package libroken18-heimdal:arm64.
	Preparing to unpack .../31-libroken18-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libroken18-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libasn1-8-heimdal:arm64.
	Preparing to unpack .../32-libasn1-8-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libasn1-8-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libbrotli1:arm64.
	Preparing to unpack .../33-libbrotli1_1.0.7-6ubuntu0.1_arm64.deb ...
	Unpacking libbrotli1:arm64 (1.0.7-6ubuntu0.1) ...
	Selecting previously unselected package libheimbase1-heimdal:arm64.
	Preparing to unpack .../34-libheimbase1-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libheimbase1-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libhcrypto4-heimdal:arm64.
	Preparing to unpack .../35-libhcrypto4-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libhcrypto4-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libwind0-heimdal:arm64.
	Preparing to unpack .../36-libwind0-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libwind0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libhx509-5-heimdal:arm64.
	Preparing to unpack .../37-libhx509-5-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libhx509-5-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libkrb5-26-heimdal:arm64.
	Preparing to unpack .../38-libkrb5-26-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libkrb5-26-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libheimntlm0-heimdal:arm64.
	Preparing to unpack .../39-libheimntlm0-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libheimntlm0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libgssapi3-heimdal:arm64.
	Preparing to unpack .../40-libgssapi3-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libgssapi3-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../41-libsasl2-modules-db_2.1.27+dfsg-2_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27+dfsg-2) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../42-libsasl2-2_2.1.27+dfsg-2_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27+dfsg-2) ...
	Selecting previously unselected package libldap-common.
	Preparing to unpack .../43-libldap-common_2.4.49+dfsg-2ubuntu1.8_all.deb ...
	Unpacking libldap-common (2.4.49+dfsg-2ubuntu1.8) ...
	Selecting previously unselected package libldap-2.4-2:arm64.
	Preparing to unpack .../44-libldap-2.4-2_2.4.49+dfsg-2ubuntu1.8_arm64.deb ...
	Unpacking libldap-2.4-2:arm64 (2.4.49+dfsg-2ubuntu1.8) ...
	Selecting previously unselected package libnghttp2-14:arm64.
	Preparing to unpack .../45-libnghttp2-14_1.40.0-1build1_arm64.deb ...
	Unpacking libnghttp2-14:arm64 (1.40.0-1build1) ...
	Selecting previously unselected package librtmp1:arm64.
	Preparing to unpack .../46-librtmp1_2.4+20151223.gitfa8646d.1-2build1_arm64.deb ...
	Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2build1) ...
	Selecting previously unselected package libssh-4:arm64.
	Preparing to unpack .../47-libssh-4_0.9.3-2ubuntu2.1_arm64.deb ...
	Unpacking libssh-4:arm64 (0.9.3-2ubuntu2.1) ...
	Selecting previously unselected package libcurl3-gnutls:arm64.
	Preparing to unpack .../48-libcurl3-gnutls_7.68.0-1ubuntu2.6_arm64.deb ...
	Unpacking libcurl3-gnutls:arm64 (7.68.0-1ubuntu2.6) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../49-libnl-3-200_3.4.0-1_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.4.0-1) ...
	Selecting previously unselected package libreadline5:arm64.
	Preparing to unpack .../50-libreadline5_5.2+dfsg-3build3_arm64.deb ...
	Unpacking libreadline5:arm64 (5.2+dfsg-3build3) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../51-libsasl2-modules_2.1.27+dfsg-2_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27+dfsg-2) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../52-libyajl2_2.1.0-3_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-3) ...
	Selecting previously unselected package libvirt0:arm64.
	Preparing to unpack .../53-libvirt0_6.0.0-0ubuntu8.12_arm64.deb ...
	Unpacking libvirt0:arm64 (6.0.0-0ubuntu8.12) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../54-lvm2_2.03.07-1ubuntu1_arm64.deb ...
	Unpacking lvm2 (2.03.07-1ubuntu1) ...
	Selecting previously unselected package thin-provisioning-tools.
	Preparing to unpack .../55-thin-provisioning-tools_0.8.5-4build1_arm64.deb ...
	Unpacking thin-provisioning-tools (0.8.5-4build1) ...
	Setting up libexpat1:arm64 (2.2.9-1build1) ...
	Setting up libkeyutils1:arm64 (1.6-6ubuntu1) ...
	Setting up libapparmor1:arm64 (2.13.3-7ubuntu5.1) ...
	Setting up libpsl5:arm64 (0.21.0-1ubuntu1) ...
	Setting up xdg-user-dirs (0.17-2ubuntu1) ...
	Setting up libglib2.0-0:arm64 (2.64.6-1~ubuntu20.04.4) ...
	No schema files found: doing nothing.
	Setting up libssl1.1:arm64 (1.1.1f-1ubuntu2.5) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.30.0 /usr/local/share/perl/5.30.0 /usr/lib/aarch64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Setting up libbrotli1:arm64 (1.0.7-6ubuntu0.1) ...
	Setting up libsqlite3-0:arm64 (3.31.1-4ubuntu0.2) ...
	Setting up libsasl2-modules:arm64 (2.1.27+dfsg-2) ...
	Setting up libyajl2:arm64 (2.1.0-3) ...
	Setting up libnghttp2-14:arm64 (1.40.0-1build1) ...
	Setting up krb5-locales (1.17-6ubuntu4.1) ...
	Setting up libldap-common (2.4.49+dfsg-2ubuntu1.8) ...
	Setting up libkrb5support0:arm64 (1.17-6ubuntu4.1) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27+dfsg-2) ...
	Setting up tzdata (2021a-0ubuntu0.20.04) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.30.0 /usr/local/share/perl/5.30.0 /usr/lib/aarch64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Configuring tzdata
	------------------
	
	Please select the geographic area in which you live. Subsequent configuration
	questions will narrow this down by presenting a list of cities, representing
	the time zones in which they are located.
	
	  1. Africa      4. Australia  7. Atlantic  10. Pacific  13. Etc
	  2. America     5. Arctic     8. Europe    11. SystemV
	  3. Antarctica  6. Asia       9. Indian    12. US
	Geographic area: 
	Use of uninitialized value $_[1] in join or string at /usr/share/perl5/Debconf/DbDriver/Stack.pm line 111.
	
	Current default time zone: '/UTC'
	Local time is now:      Fri Aug 13 00:37:00 UTC 2021.
	Universal Time is now:  Fri Aug 13 00:37:00 UTC 2021.
	Run 'dpkg-reconfigure tzdata' if you wish to change it.
	
	Use of uninitialized value $val in substitution (s///) at /usr/share/perl5/Debconf/Format/822.pm line 83, <GEN6> line 4.
	Use of uninitialized value $val in concatenation (.) or string at /usr/share/perl5/Debconf/Format/822.pm line 84, <GEN6> line 4.
	Setting up libglib2.0-data (2.64.6-1~ubuntu20.04.4) ...
	Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2build1) ...
	Setting up libdbus-1-3:arm64 (1.12.16-2ubuntu2.1) ...
	Setting up dbus (1.12.16-2ubuntu2.1) ...
	Setting up libk5crypto3:arm64 (1.17-6ubuntu4.1) ...
	Setting up libsasl2-2:arm64 (2.1.27+dfsg-2) ...
	Setting up libroken18-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Setting up libnuma1:arm64 (2.0.12-1) ...
	Setting up dmsetup (2:1.02.167-1ubuntu1) ...
	Setting up libnl-3-200:arm64 (3.4.0-1) ...
	Setting up libkrb5-3:arm64 (1.17-6ubuntu4.1) ...
	Setting up libaio1:arm64 (0.3.112-5) ...
	Setting up openssl (1.1.1f-1ubuntu2.5) ...
	Setting up readline-common (8.0-4) ...
	Setting up publicsuffix (20200303.0012-1) ...
	Setting up libheimbase1-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libreadline5:arm64 (5.2+dfsg-3build3) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Setting up libicu66:arm64 (66.1-2ubuntu2) ...
	Setting up libasn1-8-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libhcrypto4-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up ca-certificates (20210119~20.04.1) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.30.0 /usr/local/share/perl/5.30.0 /usr/lib/aarch64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Updating certificates in /etc/ssl/certs...
	129 added, 0 removed; done.
	Setting up libwind0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up thin-provisioning-tools (0.8.5-4build1) ...
	Setting up libgssapi-krb5-2:arm64 (1.17-6ubuntu4.1) ...
	Setting up libssh-4:arm64 (0.9.3-2ubuntu2.1) ...
	Setting up libxml2:arm64 (2.9.10+dfsg-5ubuntu0.20.04.1) ...
	Setting up libhx509-5-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up shared-mime-info (1.15-1) ...
	Setting up libkrb5-26-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libheimntlm0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libgssapi3-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libldap-2.4-2:arm64 (2.4.49+dfsg-2ubuntu1.8) ...
	Setting up libcurl3-gnutls:arm64 (7.68.0-1ubuntu2.6) ...
	Setting up libvirt0:arm64 (6.0.0-0ubuntu8.12) ...
	Setting up liblvm2cmd2.03:arm64 (2.03.07-1ubuntu1) ...
	Setting up dmeventd (2:1.02.167-1ubuntu1) ...
	Setting up lvm2 (2.03.07-1ubuntu1) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
	Processing triggers for ca-certificates (20210119~20.04.1) ...
	Updating certificates in /etc/ssl/certs...
	0 added, 0 removed; done.
	Running hooks in /etc/ca-certificates/update.d...
	done.

                                                
                                                
-- /stdout --
** stderr ** 
	WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb (--install):
	 package architecture (amd64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb

                                                
                                                
** /stderr **
pkg_install_test.go:87: failed to install "/Users/jenkins/workspace/out/docker-machine-driver-kvm2_1.22.0-0_amd64.deb" on "ubuntu:latest": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (73.05s)
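As with the debian:9 entry, apt resolves every dependency from ports.ubuntu.com (arm64) and only the final dpkg -i step fails. One hedged workaround sketch, assuming the Docker client and daemon in this environment accept the --platform flag and can pull the linux/amd64 variant of ubuntu:latest, is to pin the platform so the container architecture matches the package:

	# Same command as the test, with the platform pinned to amd64 so dpkg
	# accepts docker-machine-driver-kvm2_1.22.0-0_amd64.deb.
	docker run --rm --platform linux/amd64 -v/Users/jenkins/workspace/out:/var/tmp ubuntu:latest sh -c \
	  "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"

Whether the test itself should pass --platform, or the job should instead ship an arm64 .deb, is a separate design question; the sketch only illustrates the mismatch.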

                                                
                                    
x
+
TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (68.19s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/Users/jenkins/workspace/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/Users/jenkins/workspace/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": exit status 1 (1m8.181967051s)

                                                
                                                
-- stdout --
	Get:1 http://ports.ubuntu.com/ubuntu-ports groovy InRelease [267 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports groovy-updates InRelease [115 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports groovy-backports InRelease [101 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports groovy-security InRelease [110 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports groovy/universe arm64 Packages [15.8 MB]
	Get:6 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 Packages [1727 kB]
	Get:7 http://ports.ubuntu.com/ubuntu-ports groovy/restricted arm64 Packages [3561 B]
	Get:8 http://ports.ubuntu.com/ubuntu-ports groovy/multiverse arm64 Packages [208 kB]
	Get:9 http://ports.ubuntu.com/ubuntu-ports groovy-updates/universe arm64 Packages [529 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports groovy-updates/restricted arm64 Packages [3996 B]
	Get:11 http://ports.ubuntu.com/ubuntu-ports groovy-updates/multiverse arm64 Packages [3244 B]
	Get:12 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 Packages [426 kB]
	Get:13 http://ports.ubuntu.com/ubuntu-ports groovy-backports/universe arm64 Packages [6219 B]
	Get:14 http://ports.ubuntu.com/ubuntu-ports groovy-backports/main arm64 Packages [2690 B]
	Get:15 http://ports.ubuntu.com/ubuntu-ports groovy-security/restricted arm64 Packages [2877 B]
	Get:16 http://ports.ubuntu.com/ubuntu-ports groovy-security/main arm64 Packages [260 kB]
	Get:17 http://ports.ubuntu.com/ubuntu-ports groovy-security/universe arm64 Packages [398 kB]
	Get:18 http://ports.ubuntu.com/ubuntu-ports groovy-security/multiverse arm64 Packages [669 B]
	Fetched 20.0 MB in 6s (3288 kB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  ca-certificates dbus dmeventd dmsetup libaio1 libapparmor1 libasn1-8-heimdal
	  libbrotli1 libcurl3-gnutls libdbus-1-3 libdevmapper-event1.02.1
	  libdevmapper1.02.1 libexpat1 libglib2.0-0 libglib2.0-data libgssapi3-heimdal
	  libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal
	  libhx509-5-heimdal libicu67 libkrb5-26-heimdal libldap-2.4-2 libldap-common
	  liblvm2cmd2.03 libnghttp2-14 libnl-3-200 libnuma1 libpsl5 libreadline5
	  libroken18-heimdal librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db
	  libsqlite3-0 libssh-4 libwind0-heimdal libxml2 libyajl2 lvm2 openssl
	  publicsuffix readline-common shared-mime-info thin-provisioning-tools
	  xdg-user-dirs
	Suggested packages:
	  default-dbus-session-bus | dbus-session-bus libsasl2-modules-gssapi-mit
	  | libsasl2-modules-gssapi-heimdal libsasl2-modules-ldap libsasl2-modules-otp
	  libsasl2-modules-sql readline-doc
	The following NEW packages will be installed:
	  ca-certificates dbus dmeventd dmsetup libaio1 libapparmor1 libasn1-8-heimdal
	  libbrotli1 libcurl3-gnutls libdbus-1-3 libdevmapper-event1.02.1
	  libdevmapper1.02.1 libexpat1 libglib2.0-0 libglib2.0-data libgssapi3-heimdal
	  libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal
	  libhx509-5-heimdal libicu67 libkrb5-26-heimdal libldap-2.4-2 libldap-common
	  liblvm2cmd2.03 libnghttp2-14 libnl-3-200 libnuma1 libpsl5 libreadline5
	  libroken18-heimdal librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db
	  libsqlite3-0 libssh-4 libvirt0 libwind0-heimdal libxml2 libyajl2 lvm2
	  openssl publicsuffix readline-common shared-mime-info
	  thin-provisioning-tools xdg-user-dirs
	0 upgraded, 48 newly installed, 0 to remove and 7 not upgraded.
	Need to get 18.0 MB of archives.
	After this operation, 70.6 MB of additional disk space will be used.
	Get:1 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 openssl arm64 1.1.1f-1ubuntu4.4 [600 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 ca-certificates all 20210119~20.10.1 [147 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libapparmor1 arm64 3.0.0-0ubuntu1 [35.2 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libdbus-1-3 arm64 1.12.20-1ubuntu1 [173 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libexpat1 arm64 2.2.9-1build1 [61.3 kB]
	Get:6 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 dbus arm64 1.12.20-1ubuntu1 [143 kB]
	Get:7 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libdevmapper1.02.1 arm64 2:1.02.167-1ubuntu3 [110 kB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 dmsetup arm64 2:1.02.167-1ubuntu3 [68.5 kB]
	Get:9 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libglib2.0-0 arm64 2.66.1-2ubuntu0.2 [1215 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libglib2.0-data all 2.66.1-2ubuntu0.2 [6440 B]
	Get:11 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libicu67 arm64 67.1-4 [8461 kB]
	Get:12 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libsqlite3-0 arm64 3.33.0-1ubuntu0.1 [540 kB]
	Get:13 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libxml2 arm64 2.9.10+dfsg-5ubuntu0.20.10.2 [559 kB]
	Get:14 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 readline-common all 8.0-4 [53.5 kB]
	Get:15 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 shared-mime-info arm64 2.0-1 [427 kB]
	Get:16 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 xdg-user-dirs arm64 0.17-2ubuntu2 [47.6 kB]
	Get:17 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libnuma1 arm64 2.0.12-1build1 [20.6 kB]
	Get:18 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libpsl5 arm64 0.21.0-1.1ubuntu1 [52.0 kB]
	Get:19 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 publicsuffix all 20200729.1725-1 [113 kB]
	Get:20 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.167-1ubuntu3 [10.9 kB]
	Get:21 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libaio1 arm64 0.3.112-8 [7384 B]
	Get:22 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 liblvm2cmd2.03 arm64 2.03.07-1ubuntu3 [575 kB]
	Get:23 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 dmeventd arm64 2:1.02.167-1ubuntu3 [32.0 kB]
	Get:24 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libroken18-heimdal arm64 7.7.0+dfsg-2 [39.4 kB]
	Get:25 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libasn1-8-heimdal arm64 7.7.0+dfsg-2 [150 kB]
	Get:26 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libbrotli1 arm64 1.0.9-2 [267 kB]
	Get:27 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libheimbase1-heimdal arm64 7.7.0+dfsg-2 [27.9 kB]
	Get:28 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libhcrypto4-heimdal arm64 7.7.0+dfsg-2 [84.8 kB]
	Get:29 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libwind0-heimdal arm64 7.7.0+dfsg-2 [47.2 kB]
	Get:30 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libhx509-5-heimdal arm64 7.7.0+dfsg-2 [98.6 kB]
	Get:31 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libkrb5-26-heimdal arm64 7.7.0+dfsg-2 [192 kB]
	Get:32 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libheimntlm0-heimdal arm64 7.7.0+dfsg-2 [14.8 kB]
	Get:33 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libgssapi3-heimdal arm64 7.7.0+dfsg-2 [88.4 kB]
	Get:34 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libsasl2-modules-db arm64 2.1.27+dfsg-2ubuntu1 [14.9 kB]
	Get:35 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libsasl2-2 arm64 2.1.27+dfsg-2ubuntu1 [48.4 kB]
	Get:36 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libldap-2.4-2 arm64 2.4.53+dfsg-1ubuntu1.4 [147 kB]
	Get:37 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libnghttp2-14 arm64 1.41.0-3 [64.6 kB]
	Get:38 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-2build2 [53.1 kB]
	Get:39 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libssh-4 arm64 0.9.4-1ubuntu3 [161 kB]
	Get:40 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libcurl3-gnutls arm64 7.68.0-1ubuntu4.3 [212 kB]
	Get:41 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libldap-common all 2.4.53+dfsg-1ubuntu1.4 [17.7 kB]
	Get:42 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libnl-3-200 arm64 3.4.0-1 [51.5 kB]
	Get:43 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libreadline5 arm64 5.2+dfsg-3build3 [94.6 kB]
	Get:44 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libsasl2-modules arm64 2.1.27+dfsg-2ubuntu1 [46.2 kB]
	Get:45 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libyajl2 arm64 2.1.0-3 [19.3 kB]
	Get:46 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libvirt0 arm64 6.6.0-1ubuntu3.5 [1348 kB]
	Get:47 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 lvm2 arm64 2.03.07-1ubuntu3 [951 kB]
	Get:48 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 thin-provisioning-tools arm64 0.8.5-4build1 [324 kB]
	Fetched 18.0 MB in 2s (11.2 MB/s)
	Selecting previously unselected package openssl.
	(Reading database ... 4258 files and directories currently installed.)
	Preparing to unpack .../00-openssl_1.1.1f-1ubuntu4.4_arm64.deb ...
	Unpacking openssl (1.1.1f-1ubuntu4.4) ...
	Selecting previously unselected package ca-certificates.
	Preparing to unpack .../01-ca-certificates_20210119~20.10.1_all.deb ...
	Unpacking ca-certificates (20210119~20.10.1) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../02-libapparmor1_3.0.0-0ubuntu1_arm64.deb ...
	Unpacking libapparmor1:arm64 (3.0.0-0ubuntu1) ...
	Selecting previously unselected package libdbus-1-3:arm64.
	Preparing to unpack .../03-libdbus-1-3_1.12.20-1ubuntu1_arm64.deb ...
	Unpacking libdbus-1-3:arm64 (1.12.20-1ubuntu1) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../04-libexpat1_2.2.9-1build1_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.9-1build1) ...
	Selecting previously unselected package dbus.
	Preparing to unpack .../05-dbus_1.12.20-1ubuntu1_arm64.deb ...
	Unpacking dbus (1.12.20-1ubuntu1) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../06-libdevmapper1.02.1_2%3a1.02.167-1ubuntu3_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.167-1ubuntu3) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../07-dmsetup_2%3a1.02.167-1ubuntu3_arm64.deb ...
	Unpacking dmsetup (2:1.02.167-1ubuntu3) ...
	Selecting previously unselected package libglib2.0-0:arm64.
	Preparing to unpack .../08-libglib2.0-0_2.66.1-2ubuntu0.2_arm64.deb ...
	Unpacking libglib2.0-0:arm64 (2.66.1-2ubuntu0.2) ...
	Selecting previously unselected package libglib2.0-data.
	Preparing to unpack .../09-libglib2.0-data_2.66.1-2ubuntu0.2_all.deb ...
	Unpacking libglib2.0-data (2.66.1-2ubuntu0.2) ...
	Selecting previously unselected package libicu67:arm64.
	Preparing to unpack .../10-libicu67_67.1-4_arm64.deb ...
	Unpacking libicu67:arm64 (67.1-4) ...
	Selecting previously unselected package libsqlite3-0:arm64.
	Preparing to unpack .../11-libsqlite3-0_3.33.0-1ubuntu0.1_arm64.deb ...
	Unpacking libsqlite3-0:arm64 (3.33.0-1ubuntu0.1) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../12-libxml2_2.9.10+dfsg-5ubuntu0.20.10.2_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.10+dfsg-5ubuntu0.20.10.2) ...
	Selecting previously unselected package readline-common.
	Preparing to unpack .../13-readline-common_8.0-4_all.deb ...
	Unpacking readline-common (8.0-4) ...
	Selecting previously unselected package shared-mime-info.
	Preparing to unpack .../14-shared-mime-info_2.0-1_arm64.deb ...
	Unpacking shared-mime-info (2.0-1) ...
	Selecting previously unselected package xdg-user-dirs.
	Preparing to unpack .../15-xdg-user-dirs_0.17-2ubuntu2_arm64.deb ...
	Unpacking xdg-user-dirs (0.17-2ubuntu2) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../16-libnuma1_2.0.12-1build1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.12-1build1) ...
	Selecting previously unselected package libpsl5:arm64.
	Preparing to unpack .../17-libpsl5_0.21.0-1.1ubuntu1_arm64.deb ...
	Unpacking libpsl5:arm64 (0.21.0-1.1ubuntu1) ...
	Selecting previously unselected package publicsuffix.
	Preparing to unpack .../18-publicsuffix_20200729.1725-1_all.deb ...
	Unpacking publicsuffix (20200729.1725-1) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../19-libdevmapper-event1.02.1_2%3a1.02.167-1ubuntu3_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.167-1ubuntu3) ...
	Selecting previously unselected package libaio1:arm64.
	Preparing to unpack .../20-libaio1_0.3.112-8_arm64.deb ...
	Unpacking libaio1:arm64 (0.3.112-8) ...
	Selecting previously unselected package liblvm2cmd2.03:arm64.
	Preparing to unpack .../21-liblvm2cmd2.03_2.03.07-1ubuntu3_arm64.deb ...
	Unpacking liblvm2cmd2.03:arm64 (2.03.07-1ubuntu3) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../22-dmeventd_2%3a1.02.167-1ubuntu3_arm64.deb ...
	Unpacking dmeventd (2:1.02.167-1ubuntu3) ...
	Selecting previously unselected package libroken18-heimdal:arm64.
	Preparing to unpack .../23-libroken18-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libroken18-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libasn1-8-heimdal:arm64.
	Preparing to unpack .../24-libasn1-8-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libasn1-8-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libbrotli1:arm64.
	Preparing to unpack .../25-libbrotli1_1.0.9-2_arm64.deb ...
	Unpacking libbrotli1:arm64 (1.0.9-2) ...
	Selecting previously unselected package libheimbase1-heimdal:arm64.
	Preparing to unpack .../26-libheimbase1-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libheimbase1-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libhcrypto4-heimdal:arm64.
	Preparing to unpack .../27-libhcrypto4-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libhcrypto4-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libwind0-heimdal:arm64.
	Preparing to unpack .../28-libwind0-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libwind0-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libhx509-5-heimdal:arm64.
	Preparing to unpack .../29-libhx509-5-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libhx509-5-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libkrb5-26-heimdal:arm64.
	Preparing to unpack .../30-libkrb5-26-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libkrb5-26-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libheimntlm0-heimdal:arm64.
	Preparing to unpack .../31-libheimntlm0-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libheimntlm0-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libgssapi3-heimdal:arm64.
	Preparing to unpack .../32-libgssapi3-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libgssapi3-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../33-libsasl2-modules-db_2.1.27+dfsg-2ubuntu1_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27+dfsg-2ubuntu1) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../34-libsasl2-2_2.1.27+dfsg-2ubuntu1_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27+dfsg-2ubuntu1) ...
	Selecting previously unselected package libldap-2.4-2:arm64.
	Preparing to unpack .../35-libldap-2.4-2_2.4.53+dfsg-1ubuntu1.4_arm64.deb ...
	Unpacking libldap-2.4-2:arm64 (2.4.53+dfsg-1ubuntu1.4) ...
	Selecting previously unselected package libnghttp2-14:arm64.
	Preparing to unpack .../36-libnghttp2-14_1.41.0-3_arm64.deb ...
	Unpacking libnghttp2-14:arm64 (1.41.0-3) ...
	Selecting previously unselected package librtmp1:arm64.
	Preparing to unpack .../37-librtmp1_2.4+20151223.gitfa8646d.1-2build2_arm64.deb ...
	Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2build2) ...
	Selecting previously unselected package libssh-4:arm64.
	Preparing to unpack .../38-libssh-4_0.9.4-1ubuntu3_arm64.deb ...
	Unpacking libssh-4:arm64 (0.9.4-1ubuntu3) ...
	Selecting previously unselected package libcurl3-gnutls:arm64.
	Preparing to unpack .../39-libcurl3-gnutls_7.68.0-1ubuntu4.3_arm64.deb ...
	Unpacking libcurl3-gnutls:arm64 (7.68.0-1ubuntu4.3) ...
	Selecting previously unselected package libldap-common.
	Preparing to unpack .../40-libldap-common_2.4.53+dfsg-1ubuntu1.4_all.deb ...
	Unpacking libldap-common (2.4.53+dfsg-1ubuntu1.4) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../41-libnl-3-200_3.4.0-1_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.4.0-1) ...
	Selecting previously unselected package libreadline5:arm64.
	Preparing to unpack .../42-libreadline5_5.2+dfsg-3build3_arm64.deb ...
	Unpacking libreadline5:arm64 (5.2+dfsg-3build3) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../43-libsasl2-modules_2.1.27+dfsg-2ubuntu1_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27+dfsg-2ubuntu1) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../44-libyajl2_2.1.0-3_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-3) ...
	Selecting previously unselected package libvirt0:arm64.
	Preparing to unpack .../45-libvirt0_6.6.0-1ubuntu3.5_arm64.deb ...
	Unpacking libvirt0:arm64 (6.6.0-1ubuntu3.5) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../46-lvm2_2.03.07-1ubuntu3_arm64.deb ...
	Unpacking lvm2 (2.03.07-1ubuntu3) ...
	Selecting previously unselected package thin-provisioning-tools.
	Preparing to unpack .../47-thin-provisioning-tools_0.8.5-4build1_arm64.deb ...
	Unpacking thin-provisioning-tools (0.8.5-4build1) ...
	Setting up libexpat1:arm64 (2.2.9-1build1) ...
	Setting up libapparmor1:arm64 (3.0.0-0ubuntu1) ...
	Setting up libpsl5:arm64 (0.21.0-1.1ubuntu1) ...
	Setting up libicu67:arm64 (67.1-4) ...
	Setting up xdg-user-dirs (0.17-2ubuntu2) ...
	Setting up libglib2.0-0:arm64 (2.66.1-2ubuntu0.2) ...
	No schema files found: doing nothing.
	Setting up libbrotli1:arm64 (1.0.9-2) ...
	Setting up libsqlite3-0:arm64 (3.33.0-1ubuntu0.1) ...
	Setting up libsasl2-modules:arm64 (2.1.27+dfsg-2ubuntu1) ...
	Setting up libyajl2:arm64 (2.1.0-3) ...
	Setting up libnghttp2-14:arm64 (1.41.0-3) ...
	Setting up libldap-common (2.4.53+dfsg-1ubuntu1.4) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27+dfsg-2ubuntu1) ...
	Setting up libglib2.0-data (2.66.1-2ubuntu0.2) ...
	Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2build2) ...
	Setting up libdbus-1-3:arm64 (1.12.20-1ubuntu1) ...
	Setting up dbus (1.12.20-1ubuntu1) ...
	Setting up libsasl2-2:arm64 (2.1.27+dfsg-2ubuntu1) ...
	Setting up libssh-4:arm64 (0.9.4-1ubuntu3) ...
	Setting up libroken18-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.167-1ubuntu3) ...
	Setting up libnuma1:arm64 (2.0.12-1build1) ...
	Setting up dmsetup (2:1.02.167-1ubuntu3) ...
	Setting up libnl-3-200:arm64 (3.4.0-1) ...
	Setting up libaio1:arm64 (0.3.112-8) ...
	Setting up openssl (1.1.1f-1ubuntu4.4) ...
	Setting up readline-common (8.0-4) ...
	Setting up publicsuffix (20200729.1725-1) ...
	Setting up libxml2:arm64 (2.9.10+dfsg-5ubuntu0.20.10.2) ...
	Setting up libheimbase1-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up libreadline5:arm64 (5.2+dfsg-3build3) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.167-1ubuntu3) ...
	Setting up libasn1-8-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up libhcrypto4-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up ca-certificates (20210119~20.10.1) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.30.3 /usr/local/share/perl/5.30.3 /usr/lib/aarch64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl-base /usr/lib/aarch64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Updating certificates in /etc/ssl/certs...
	129 added, 0 removed; done.
	Setting up libwind0-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up shared-mime-info (2.0-1) ...
	Setting up thin-provisioning-tools (0.8.5-4build1) ...
	Setting up libhx509-5-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up libkrb5-26-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up libheimntlm0-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up libgssapi3-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up libldap-2.4-2:arm64 (2.4.53+dfsg-1ubuntu1.4) ...
	Setting up libcurl3-gnutls:arm64 (7.68.0-1ubuntu4.3) ...
	Setting up libvirt0:arm64 (6.6.0-1ubuntu3.5) ...
	Setting up liblvm2cmd2.03:arm64 (2.03.07-1ubuntu3) ...
	Setting up dmeventd (2:1.02.167-1ubuntu3) ...
	Setting up lvm2 (2.03.07-1ubuntu3) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Processing triggers for libc-bin (2.32-0ubuntu3) ...
	Processing triggers for ca-certificates (20210119~20.10.1) ...
	Updating certificates in /etc/ssl/certs...
	0 added, 0 removed; done.
	Running hooks in /etc/ca-certificates/update.d...
	done.

                                                
                                                
-- /stdout --
** stderr ** 
	WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb (--install):
	 package architecture (amd64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb

                                                
                                                
** /stderr **
pkg_install_test.go:87: failed to install "/Users/jenkins/workspace/out/docker-machine-driver-kvm2_1.22.0-0_amd64.deb" on "ubuntu:20.10": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (68.19s)
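The ubuntu:20.10 entry fails identically. To confirm which platform Docker actually selected for the cached image (a diagnostic sketch; .Os and .Architecture are standard docker image inspect template fields), one could run:

	# Report the OS/architecture of the locally cached ubuntu:20.10 image
	# that the test container was started from.
	docker image inspect --format '{{.Os}}/{{.Architecture}}' ubuntu:20.10

Given the WARNING in stderr this would be expected to report linux/arm64, which is why the amd64 package is rejected.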

                                                
                                    
x
+
TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (72.55s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/Users/jenkins/workspace/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/Users/jenkins/workspace/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": exit status 1 (1m12.551796071s)

                                                
                                                
-- stdout --
	Get:1 http://ports.ubuntu.com/ubuntu-ports focal InRelease [265 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease [114 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease [101 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease [114 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 Packages [1234 kB]
	Get:6 http://ports.ubuntu.com/ubuntu-ports focal/multiverse arm64 Packages [139 kB]
	Get:7 http://ports.ubuntu.com/ubuntu-ports focal/universe arm64 Packages [11.1 MB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports focal/restricted arm64 Packages [1317 B]
	Get:9 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 Packages [1077 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports focal-updates/multiverse arm64 Packages [8711 B]
	Get:11 http://ports.ubuntu.com/ubuntu-ports focal-updates/restricted arm64 Packages [3110 B]
	Get:12 http://ports.ubuntu.com/ubuntu-ports focal-updates/universe arm64 Packages [994 kB]
	Get:13 http://ports.ubuntu.com/ubuntu-ports focal-backports/universe arm64 Packages [6320 B]
	Get:14 http://ports.ubuntu.com/ubuntu-ports focal-backports/main arm64 Packages [2680 B]
	Get:15 http://ports.ubuntu.com/ubuntu-ports focal-security/multiverse arm64 Packages [3243 B]
	Get:16 http://ports.ubuntu.com/ubuntu-ports focal-security/main arm64 Packages [669 kB]
	Get:17 http://ports.ubuntu.com/ubuntu-ports focal-security/universe arm64 Packages [725 kB]
	Get:18 http://ports.ubuntu.com/ubuntu-ports focal-security/restricted arm64 Packages [2866 B]
	Fetched 16.6 MB in 6s (2608 kB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libasn1-8-heimdal libbrotli1 libcurl3-gnutls libdbus-1-3
	  libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1 libglib2.0-0
	  libglib2.0-data libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal
	  libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libicu66
	  libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0
	  libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14 libnl-3-200
	  libnuma1 libpsl5 libreadline5 libroken18-heimdal librtmp1 libsasl2-2
	  libsasl2-modules libsasl2-modules-db libsqlite3-0 libssh-4 libssl1.1
	  libwind0-heimdal libxml2 libyajl2 lvm2 openssl publicsuffix readline-common
	  shared-mime-info thin-provisioning-tools tzdata xdg-user-dirs
	Suggested packages:
	  default-dbus-session-bus | dbus-session-bus krb5-doc krb5-user
	  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
	  libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql readline-doc
	The following NEW packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libasn1-8-heimdal libbrotli1 libcurl3-gnutls libdbus-1-3
	  libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1 libglib2.0-0
	  libglib2.0-data libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal
	  libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libicu66
	  libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0
	  libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14 libnl-3-200
	  libnuma1 libpsl5 libreadline5 libroken18-heimdal librtmp1 libsasl2-2
	  libsasl2-modules libsasl2-modules-db libsqlite3-0 libssh-4 libssl1.1
	  libvirt0 libwind0-heimdal libxml2 libyajl2 lvm2 openssl publicsuffix
	  readline-common shared-mime-info thin-provisioning-tools tzdata
	  xdg-user-dirs
	0 upgraded, 56 newly installed, 0 to remove and 14 not upgraded.
	Need to get 19.8 MB of archives.
	After this operation, 79.4 MB of additional disk space will be used.
	Get:1 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libssl1.1 arm64 1.1.1f-1ubuntu2.5 [1155 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 openssl arm64 1.1.1f-1ubuntu2.5 [599 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 ca-certificates all 20210119~20.04.1 [146 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libapparmor1 arm64 2.13.3-7ubuntu5.1 [32.9 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libdbus-1-3 arm64 1.12.16-2ubuntu2.1 [170 kB]
	Get:6 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libexpat1 arm64 2.2.9-1build1 [61.3 kB]
	Get:7 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 dbus arm64 1.12.16-2ubuntu2.1 [141 kB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libdevmapper1.02.1 arm64 2:1.02.167-1ubuntu1 [110 kB]
	Get:9 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 dmsetup arm64 2:1.02.167-1ubuntu1 [68.5 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libglib2.0-0 arm64 2.64.6-1~ubuntu20.04.4 [1200 kB]
	Get:11 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libglib2.0-data all 2.64.6-1~ubuntu20.04.4 [6052 B]
	Get:12 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 tzdata all 2021a-0ubuntu0.20.04 [295 kB]
	Get:13 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libicu66 arm64 66.1-2ubuntu2 [8357 kB]
	Get:14 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libsqlite3-0 arm64 3.31.1-4ubuntu0.2 [507 kB]
	Get:15 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libxml2 arm64 2.9.10+dfsg-5ubuntu0.20.04.1 [572 kB]
	Get:16 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 readline-common all 8.0-4 [53.5 kB]
	Get:17 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 shared-mime-info arm64 1.15-1 [429 kB]
	Get:18 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 xdg-user-dirs arm64 0.17-2ubuntu1 [47.6 kB]
	Get:19 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 krb5-locales all 1.17-6ubuntu4.1 [11.4 kB]
	Get:20 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libkrb5support0 arm64 1.17-6ubuntu4.1 [30.4 kB]
	Get:21 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libk5crypto3 arm64 1.17-6ubuntu4.1 [80.4 kB]
	Get:22 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libkeyutils1 arm64 1.6-6ubuntu1 [10.1 kB]
	Get:23 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libkrb5-3 arm64 1.17-6ubuntu4.1 [312 kB]
	Get:24 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libgssapi-krb5-2 arm64 1.17-6ubuntu4.1 [113 kB]
	Get:25 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libnuma1 arm64 2.0.12-1 [20.5 kB]
	Get:26 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libpsl5 arm64 0.21.0-1ubuntu1 [51.3 kB]
	Get:27 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 publicsuffix all 20200303.0012-1 [111 kB]
	Get:28 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.167-1ubuntu1 [10.9 kB]
	Get:29 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libaio1 arm64 0.3.112-5 [7072 B]
	Get:30 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 liblvm2cmd2.03 arm64 2.03.07-1ubuntu1 [576 kB]
	Get:31 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 dmeventd arm64 2:1.02.167-1ubuntu1 [32.0 kB]
	Get:32 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libroken18-heimdal arm64 7.7.0+dfsg-1ubuntu1 [39.4 kB]
	Get:33 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libasn1-8-heimdal arm64 7.7.0+dfsg-1ubuntu1 [150 kB]
	Get:34 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libbrotli1 arm64 1.0.7-6ubuntu0.1 [257 kB]
	Get:35 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libheimbase1-heimdal arm64 7.7.0+dfsg-1ubuntu1 [27.9 kB]
	Get:36 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libhcrypto4-heimdal arm64 7.7.0+dfsg-1ubuntu1 [86.4 kB]
	Get:37 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libwind0-heimdal arm64 7.7.0+dfsg-1ubuntu1 [47.3 kB]
	Get:38 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libhx509-5-heimdal arm64 7.7.0+dfsg-1ubuntu1 [98.7 kB]
	Get:39 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libkrb5-26-heimdal arm64 7.7.0+dfsg-1ubuntu1 [191 kB]
	Get:40 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libheimntlm0-heimdal arm64 7.7.0+dfsg-1ubuntu1 [14.7 kB]
	Get:41 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libgssapi3-heimdal arm64 7.7.0+dfsg-1ubuntu1 [88.3 kB]
	Get:42 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libsasl2-modules-db arm64 2.1.27+dfsg-2 [15.1 kB]
	Get:43 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libsasl2-2 arm64 2.1.27+dfsg-2 [48.4 kB]
	Get:44 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libldap-common all 2.4.49+dfsg-2ubuntu1.8 [16.6 kB]
	Get:45 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libldap-2.4-2 arm64 2.4.49+dfsg-2ubuntu1.8 [145 kB]
	Get:46 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libnghttp2-14 arm64 1.40.0-1build1 [74.7 kB]
	Get:47 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-2build1 [53.3 kB]
	Get:48 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libssh-4 arm64 0.9.3-2ubuntu2.1 [159 kB]
	Get:49 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libcurl3-gnutls arm64 7.68.0-1ubuntu2.6 [212 kB]
	Get:50 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libnl-3-200 arm64 3.4.0-1 [51.5 kB]
	Get:51 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libreadline5 arm64 5.2+dfsg-3build3 [94.6 kB]
	Get:52 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libsasl2-modules arm64 2.1.27+dfsg-2 [46.3 kB]
	Get:53 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libyajl2 arm64 2.1.0-3 [19.3 kB]
	Get:54 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libvirt0 arm64 6.0.0-0ubuntu8.12 [1267 kB]
	Get:55 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 lvm2 arm64 2.03.07-1ubuntu1 [951 kB]
	Get:56 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 thin-provisioning-tools arm64 0.8.5-4build1 [324 kB]
	Fetched 19.8 MB in 2s (11.5 MB/s)
	Selecting previously unselected package libssl1.1:arm64.
	(Reading database ... 4120 files and directories currently installed.)
	Preparing to unpack .../00-libssl1.1_1.1.1f-1ubuntu2.5_arm64.deb ...
	Unpacking libssl1.1:arm64 (1.1.1f-1ubuntu2.5) ...
	Selecting previously unselected package openssl.
	Preparing to unpack .../01-openssl_1.1.1f-1ubuntu2.5_arm64.deb ...
	Unpacking openssl (1.1.1f-1ubuntu2.5) ...
	Selecting previously unselected package ca-certificates.
	Preparing to unpack .../02-ca-certificates_20210119~20.04.1_all.deb ...
	Unpacking ca-certificates (20210119~20.04.1) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../03-libapparmor1_2.13.3-7ubuntu5.1_arm64.deb ...
	Unpacking libapparmor1:arm64 (2.13.3-7ubuntu5.1) ...
	Selecting previously unselected package libdbus-1-3:arm64.
	Preparing to unpack .../04-libdbus-1-3_1.12.16-2ubuntu2.1_arm64.deb ...
	Unpacking libdbus-1-3:arm64 (1.12.16-2ubuntu2.1) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../05-libexpat1_2.2.9-1build1_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.9-1build1) ...
	Selecting previously unselected package dbus.
	Preparing to unpack .../06-dbus_1.12.16-2ubuntu2.1_arm64.deb ...
	Unpacking dbus (1.12.16-2ubuntu2.1) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../07-libdevmapper1.02.1_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../08-dmsetup_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking dmsetup (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package libglib2.0-0:arm64.
	Preparing to unpack .../09-libglib2.0-0_2.64.6-1~ubuntu20.04.4_arm64.deb ...
	Unpacking libglib2.0-0:arm64 (2.64.6-1~ubuntu20.04.4) ...
	Selecting previously unselected package libglib2.0-data.
	Preparing to unpack .../10-libglib2.0-data_2.64.6-1~ubuntu20.04.4_all.deb ...
	Unpacking libglib2.0-data (2.64.6-1~ubuntu20.04.4) ...
	Selecting previously unselected package tzdata.
	Preparing to unpack .../11-tzdata_2021a-0ubuntu0.20.04_all.deb ...
	Unpacking tzdata (2021a-0ubuntu0.20.04) ...
	Selecting previously unselected package libicu66:arm64.
	Preparing to unpack .../12-libicu66_66.1-2ubuntu2_arm64.deb ...
	Unpacking libicu66:arm64 (66.1-2ubuntu2) ...
	Selecting previously unselected package libsqlite3-0:arm64.
	Preparing to unpack .../13-libsqlite3-0_3.31.1-4ubuntu0.2_arm64.deb ...
	Unpacking libsqlite3-0:arm64 (3.31.1-4ubuntu0.2) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../14-libxml2_2.9.10+dfsg-5ubuntu0.20.04.1_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.10+dfsg-5ubuntu0.20.04.1) ...
	Selecting previously unselected package readline-common.
	Preparing to unpack .../15-readline-common_8.0-4_all.deb ...
	Unpacking readline-common (8.0-4) ...
	Selecting previously unselected package shared-mime-info.
	Preparing to unpack .../16-shared-mime-info_1.15-1_arm64.deb ...
	Unpacking shared-mime-info (1.15-1) ...
	Selecting previously unselected package xdg-user-dirs.
	Preparing to unpack .../17-xdg-user-dirs_0.17-2ubuntu1_arm64.deb ...
	Unpacking xdg-user-dirs (0.17-2ubuntu1) ...
	Selecting previously unselected package krb5-locales.
	Preparing to unpack .../18-krb5-locales_1.17-6ubuntu4.1_all.deb ...
	Unpacking krb5-locales (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libkrb5support0:arm64.
	Preparing to unpack .../19-libkrb5support0_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libkrb5support0:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libk5crypto3:arm64.
	Preparing to unpack .../20-libk5crypto3_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libk5crypto3:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libkeyutils1:arm64.
	Preparing to unpack .../21-libkeyutils1_1.6-6ubuntu1_arm64.deb ...
	Unpacking libkeyutils1:arm64 (1.6-6ubuntu1) ...
	Selecting previously unselected package libkrb5-3:arm64.
	Preparing to unpack .../22-libkrb5-3_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libkrb5-3:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libgssapi-krb5-2:arm64.
	Preparing to unpack .../23-libgssapi-krb5-2_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libgssapi-krb5-2:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../24-libnuma1_2.0.12-1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.12-1) ...
	Selecting previously unselected package libpsl5:arm64.
	Preparing to unpack .../25-libpsl5_0.21.0-1ubuntu1_arm64.deb ...
	Unpacking libpsl5:arm64 (0.21.0-1ubuntu1) ...
	Selecting previously unselected package publicsuffix.
	Preparing to unpack .../26-publicsuffix_20200303.0012-1_all.deb ...
	Unpacking publicsuffix (20200303.0012-1) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../27-libdevmapper-event1.02.1_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package libaio1:arm64.
	Preparing to unpack .../28-libaio1_0.3.112-5_arm64.deb ...
	Unpacking libaio1:arm64 (0.3.112-5) ...
	Selecting previously unselected package liblvm2cmd2.03:arm64.
	Preparing to unpack .../29-liblvm2cmd2.03_2.03.07-1ubuntu1_arm64.deb ...
	Unpacking liblvm2cmd2.03:arm64 (2.03.07-1ubuntu1) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../30-dmeventd_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking dmeventd (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package libroken18-heimdal:arm64.
	Preparing to unpack .../31-libroken18-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libroken18-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libasn1-8-heimdal:arm64.
	Preparing to unpack .../32-libasn1-8-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libasn1-8-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libbrotli1:arm64.
	Preparing to unpack .../33-libbrotli1_1.0.7-6ubuntu0.1_arm64.deb ...
	Unpacking libbrotli1:arm64 (1.0.7-6ubuntu0.1) ...
	Selecting previously unselected package libheimbase1-heimdal:arm64.
	Preparing to unpack .../34-libheimbase1-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libheimbase1-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libhcrypto4-heimdal:arm64.
	Preparing to unpack .../35-libhcrypto4-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libhcrypto4-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libwind0-heimdal:arm64.
	Preparing to unpack .../36-libwind0-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libwind0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libhx509-5-heimdal:arm64.
	Preparing to unpack .../37-libhx509-5-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libhx509-5-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libkrb5-26-heimdal:arm64.
	Preparing to unpack .../38-libkrb5-26-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libkrb5-26-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libheimntlm0-heimdal:arm64.
	Preparing to unpack .../39-libheimntlm0-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libheimntlm0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libgssapi3-heimdal:arm64.
	Preparing to unpack .../40-libgssapi3-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libgssapi3-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../41-libsasl2-modules-db_2.1.27+dfsg-2_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27+dfsg-2) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../42-libsasl2-2_2.1.27+dfsg-2_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27+dfsg-2) ...
	Selecting previously unselected package libldap-common.
	Preparing to unpack .../43-libldap-common_2.4.49+dfsg-2ubuntu1.8_all.deb ...
	Unpacking libldap-common (2.4.49+dfsg-2ubuntu1.8) ...
	Selecting previously unselected package libldap-2.4-2:arm64.
	Preparing to unpack .../44-libldap-2.4-2_2.4.49+dfsg-2ubuntu1.8_arm64.deb ...
	Unpacking libldap-2.4-2:arm64 (2.4.49+dfsg-2ubuntu1.8) ...
	Selecting previously unselected package libnghttp2-14:arm64.
	Preparing to unpack .../45-libnghttp2-14_1.40.0-1build1_arm64.deb ...
	Unpacking libnghttp2-14:arm64 (1.40.0-1build1) ...
	Selecting previously unselected package librtmp1:arm64.
	Preparing to unpack .../46-librtmp1_2.4+20151223.gitfa8646d.1-2build1_arm64.deb ...
	Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2build1) ...
	Selecting previously unselected package libssh-4:arm64.
	Preparing to unpack .../47-libssh-4_0.9.3-2ubuntu2.1_arm64.deb ...
	Unpacking libssh-4:arm64 (0.9.3-2ubuntu2.1) ...
	Selecting previously unselected package libcurl3-gnutls:arm64.
	Preparing to unpack .../48-libcurl3-gnutls_7.68.0-1ubuntu2.6_arm64.deb ...
	Unpacking libcurl3-gnutls:arm64 (7.68.0-1ubuntu2.6) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../49-libnl-3-200_3.4.0-1_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.4.0-1) ...
	Selecting previously unselected package libreadline5:arm64.
	Preparing to unpack .../50-libreadline5_5.2+dfsg-3build3_arm64.deb ...
	Unpacking libreadline5:arm64 (5.2+dfsg-3build3) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../51-libsasl2-modules_2.1.27+dfsg-2_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27+dfsg-2) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../52-libyajl2_2.1.0-3_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-3) ...
	Selecting previously unselected package libvirt0:arm64.
	Preparing to unpack .../53-libvirt0_6.0.0-0ubuntu8.12_arm64.deb ...
	Unpacking libvirt0:arm64 (6.0.0-0ubuntu8.12) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../54-lvm2_2.03.07-1ubuntu1_arm64.deb ...
	Unpacking lvm2 (2.03.07-1ubuntu1) ...
	Selecting previously unselected package thin-provisioning-tools.
	Preparing to unpack .../55-thin-provisioning-tools_0.8.5-4build1_arm64.deb ...
	Unpacking thin-provisioning-tools (0.8.5-4build1) ...
	Setting up libexpat1:arm64 (2.2.9-1build1) ...
	Setting up libkeyutils1:arm64 (1.6-6ubuntu1) ...
	Setting up libapparmor1:arm64 (2.13.3-7ubuntu5.1) ...
	Setting up libpsl5:arm64 (0.21.0-1ubuntu1) ...
	Setting up xdg-user-dirs (0.17-2ubuntu1) ...
	Setting up libglib2.0-0:arm64 (2.64.6-1~ubuntu20.04.4) ...
	No schema files found: doing nothing.
	Setting up libssl1.1:arm64 (1.1.1f-1ubuntu2.5) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.30.0 /usr/local/share/perl/5.30.0 /usr/lib/aarch64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Setting up libbrotli1:arm64 (1.0.7-6ubuntu0.1) ...
	Setting up libsqlite3-0:arm64 (3.31.1-4ubuntu0.2) ...
	Setting up libsasl2-modules:arm64 (2.1.27+dfsg-2) ...
	Setting up libyajl2:arm64 (2.1.0-3) ...
	Setting up libnghttp2-14:arm64 (1.40.0-1build1) ...
	Setting up krb5-locales (1.17-6ubuntu4.1) ...
	Setting up libldap-common (2.4.49+dfsg-2ubuntu1.8) ...
	Setting up libkrb5support0:arm64 (1.17-6ubuntu4.1) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27+dfsg-2) ...
	Setting up tzdata (2021a-0ubuntu0.20.04) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.30.0 /usr/local/share/perl/5.30.0 /usr/lib/aarch64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Configuring tzdata
	------------------
	
	Please select the geographic area in which you live. Subsequent configuration
	questions will narrow this down by presenting a list of cities, representing
	the time zones in which they are located.
	
	  1. Africa      4. Australia  7. Atlantic  10. Pacific  13. Etc
	  2. America     5. Arctic     8. Europe    11. SystemV
	  3. Antarctica  6. Asia       9. Indian    12. US
	Geographic area: 
	Use of uninitialized value $_[1] in join or string at /usr/share/perl5/Debconf/DbDriver/Stack.pm line 111.
	
	Current default time zone: '/UTC'
	Local time is now:      Fri Aug 13 00:39:21 UTC 2021.
	Universal Time is now:  Fri Aug 13 00:39:21 UTC 2021.
	Run 'dpkg-reconfigure tzdata' if you wish to change it.
	
	Use of uninitialized value $val in substitution (s///) at /usr/share/perl5/Debconf/Format/822.pm line 83, <GEN6> line 4.
	Use of uninitialized value $val in concatenation (.) or string at /usr/share/perl5/Debconf/Format/822.pm line 84, <GEN6> line 4.
	Setting up libglib2.0-data (2.64.6-1~ubuntu20.04.4) ...
	Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2build1) ...
	Setting up libdbus-1-3:arm64 (1.12.16-2ubuntu2.1) ...
	Setting up dbus (1.12.16-2ubuntu2.1) ...
	Setting up libk5crypto3:arm64 (1.17-6ubuntu4.1) ...
	Setting up libsasl2-2:arm64 (2.1.27+dfsg-2) ...
	Setting up libroken18-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Setting up libnuma1:arm64 (2.0.12-1) ...
	Setting up dmsetup (2:1.02.167-1ubuntu1) ...
	Setting up libnl-3-200:arm64 (3.4.0-1) ...
	Setting up libkrb5-3:arm64 (1.17-6ubuntu4.1) ...
	Setting up libaio1:arm64 (0.3.112-5) ...
	Setting up openssl (1.1.1f-1ubuntu2.5) ...
	Setting up readline-common (8.0-4) ...
	Setting up publicsuffix (20200303.0012-1) ...
	Setting up libheimbase1-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libreadline5:arm64 (5.2+dfsg-3build3) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Setting up libicu66:arm64 (66.1-2ubuntu2) ...
	Setting up libasn1-8-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libhcrypto4-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up ca-certificates (20210119~20.04.1) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.30.0 /usr/local/share/perl/5.30.0 /usr/lib/aarch64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Updating certificates in /etc/ssl/certs...
	129 added, 0 removed; done.
	Setting up libwind0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up thin-provisioning-tools (0.8.5-4build1) ...
	Setting up libgssapi-krb5-2:arm64 (1.17-6ubuntu4.1) ...
	Setting up libssh-4:arm64 (0.9.3-2ubuntu2.1) ...
	Setting up libxml2:arm64 (2.9.10+dfsg-5ubuntu0.20.04.1) ...
	Setting up libhx509-5-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up shared-mime-info (1.15-1) ...
	Setting up libkrb5-26-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libheimntlm0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libgssapi3-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libldap-2.4-2:arm64 (2.4.49+dfsg-2ubuntu1.8) ...
	Setting up libcurl3-gnutls:arm64 (7.68.0-1ubuntu2.6) ...
	Setting up libvirt0:arm64 (6.0.0-0ubuntu8.12) ...
	Setting up liblvm2cmd2.03:arm64 (2.03.07-1ubuntu1) ...
	Setting up dmeventd (2:1.02.167-1ubuntu1) ...
	Setting up lvm2 (2.03.07-1ubuntu1) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
	Processing triggers for ca-certificates (20210119~20.04.1) ...
	Updating certificates in /etc/ssl/certs...
	0 added, 0 removed; done.
	Running hooks in /etc/ca-certificates/update.d...
	done.

                                                
                                                
-- /stdout --
** stderr ** 
	WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb (--install):
	 package architecture (amd64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb

                                                
                                                
** /stderr **
pkg_install_test.go:87: failed to install "/Users/jenkins/workspace/out/docker-machine-driver-kvm2_1.22.0-0_amd64.deb" on "ubuntu:20.04": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (72.55s)

                                                
                                    
x
+
TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (69.45s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/Users/jenkins/workspace/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/Users/jenkins/workspace/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": exit status 1 (1m9.450281217s)

                                                
                                                
-- stdout --
	Get:1 http://ports.ubuntu.com/ubuntu-ports bionic InRelease [242 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease [88.7 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease [74.6 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease [88.7 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 Packages [11.0 MB]
	Get:6 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 Packages [1285 kB]
	Get:7 http://ports.ubuntu.com/ubuntu-ports bionic/multiverse arm64 Packages [153 kB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports bionic/restricted arm64 Packages [572 B]
	Get:9 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 Packages [1655 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports bionic-updates/universe arm64 Packages [1945 kB]
	Get:11 http://ports.ubuntu.com/ubuntu-ports bionic-updates/restricted arm64 Packages [3990 B]
	Get:12 http://ports.ubuntu.com/ubuntu-ports bionic-updates/multiverse arm64 Packages [5548 B]
	Get:13 http://ports.ubuntu.com/ubuntu-ports bionic-backports/universe arm64 Packages [11.0 kB]
	Get:14 http://ports.ubuntu.com/ubuntu-ports bionic-backports/main arm64 Packages [11.2 kB]
	Get:15 http://ports.ubuntu.com/ubuntu-ports bionic-security/multiverse arm64 Packages [2819 B]
	Get:16 http://ports.ubuntu.com/ubuntu-ports bionic-security/universe arm64 Packages [1252 kB]
	Get:17 http://ports.ubuntu.com/ubuntu-ports bionic-security/main arm64 Packages [1270 kB]
	Get:18 http://ports.ubuntu.com/ubuntu-ports bionic-security/restricted arm64 Packages [3318 B]
	Fetched 19.1 MB in 6s (3012 kB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libapparmor1
	  libasn1-8-heimdal libavahi-client3 libavahi-common-data libavahi-common3
	  libcurl3-gnutls libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1
	  libexpat1 libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal
	  libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libicu60
	  libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0
	  libldap-2.4-2 libldap-common liblvm2app2.2 liblvm2cmd2.02 libnghttp2-14
	  libnl-3-200 libnuma1 libpsl5 libreadline5 libroken18-heimdal librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libsqlite3-0 libssl1.1
	  libwind0-heimdal libxml2 libyajl2 lvm2 openssl publicsuffix readline-common
	Suggested packages:
	  default-dbus-session-bus | dbus-session-bus krb5-doc krb5-user
	  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
	  libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql
	  thin-provisioning-tools readline-doc
	The following NEW packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libapparmor1
	  libasn1-8-heimdal libavahi-client3 libavahi-common-data libavahi-common3
	  libcurl3-gnutls libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1
	  libexpat1 libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal
	  libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libicu60
	  libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0
	  libldap-2.4-2 libldap-common liblvm2app2.2 liblvm2cmd2.02 libnghttp2-14
	  libnl-3-200 libnuma1 libpsl5 libreadline5 libroken18-heimdal librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libsqlite3-0 libssl1.1
	  libvirt0 libwind0-heimdal libxml2 libyajl2 lvm2 openssl publicsuffix
	  readline-common
	0 upgraded, 51 newly installed, 0 to remove and 7 not upgraded.
	Need to get 16.2 MB of archives.
	After this operation, 62.3 MB of additional disk space will be used.
	Get:1 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libssl1.1 arm64 1.1.1-1ubuntu2.1~18.04.10 [1062 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 openssl arm64 1.1.1-1ubuntu2.1~18.04.10 [583 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 ca-certificates all 20210119~18.04.1 [147 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libapparmor1 arm64 2.12-4ubuntu5.1 [28.4 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libdbus-1-3 arm64 1.12.2-1ubuntu1.2 [152 kB]
	Get:6 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libexpat1 arm64 2.2.5-3ubuntu0.2 [69.3 kB]
	Get:7 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 dbus arm64 1.12.2-1ubuntu1.2 [130 kB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libdevmapper1.02.1 arm64 2:1.02.145-4.1ubuntu3.18.04.3 [100 kB]
	Get:9 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 dmsetup arm64 2:1.02.145-4.1ubuntu3.18.04.3 [65.1 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libicu60 arm64 60.2-3ubuntu3.1 [7987 kB]
	Get:11 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libsqlite3-0 arm64 3.22.0-1ubuntu0.4 [430 kB]
	Get:12 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libxml2 arm64 2.9.4+dfsg1-6.1ubuntu1.4 [548 kB]
	Get:13 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 readline-common all 7.0-3 [52.9 kB]
	Get:14 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 krb5-locales all 1.16-2ubuntu0.2 [13.4 kB]
	Get:15 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libkrb5support0 arm64 1.16-2ubuntu0.2 [28.1 kB]
	Get:16 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libk5crypto3 arm64 1.16-2ubuntu0.2 [79.9 kB]
	Get:17 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libkeyutils1 arm64 1.5.9-9.2ubuntu2 [8112 B]
	Get:18 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libkrb5-3 arm64 1.16-2ubuntu0.2 [241 kB]
	Get:19 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libgssapi-krb5-2 arm64 1.16-2ubuntu0.2 [103 kB]
	Get:20 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libnuma1 arm64 2.0.11-2.1ubuntu0.1 [19.4 kB]
	Get:21 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libpsl5 arm64 0.19.1-5build1 [40.9 kB]
	Get:22 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 publicsuffix all 20180223.1310-1 [97.6 kB]
	Get:23 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.145-4.1ubuntu3.18.04.3 [9444 B]
	Get:24 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 liblvm2cmd2.02 arm64 2.02.176-4.1ubuntu3.18.04.3 [471 kB]
	Get:25 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 dmeventd arm64 2:1.02.145-4.1ubuntu3.18.04.3 [25.9 kB]
	Get:26 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libroken18-heimdal arm64 7.5.0+dfsg-1 [35.4 kB]
	Get:27 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libasn1-8-heimdal arm64 7.5.0+dfsg-1 [130 kB]
	Get:28 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libavahi-common-data arm64 0.7-3.1ubuntu1.3 [22.2 kB]
	Get:29 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libavahi-common3 arm64 0.7-3.1ubuntu1.3 [18.4 kB]
	Get:30 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libavahi-client3 arm64 0.7-3.1ubuntu1.3 [21.9 kB]
	Get:31 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libheimbase1-heimdal arm64 7.5.0+dfsg-1 [24.9 kB]
	Get:32 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libhcrypto4-heimdal arm64 7.5.0+dfsg-1 [76.4 kB]
	Get:33 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libwind0-heimdal arm64 7.5.0+dfsg-1 [47.0 kB]
	Get:34 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libhx509-5-heimdal arm64 7.5.0+dfsg-1 [88.5 kB]
	Get:35 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libkrb5-26-heimdal arm64 7.5.0+dfsg-1 [170 kB]
	Get:36 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libheimntlm0-heimdal arm64 7.5.0+dfsg-1 [13.3 kB]
	Get:37 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libgssapi3-heimdal arm64 7.5.0+dfsg-1 [79.1 kB]
	Get:38 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libsasl2-modules-db arm64 2.1.27~101-g0780600+dfsg-3ubuntu2.3 [13.6 kB]
	Get:39 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libsasl2-2 arm64 2.1.27~101-g0780600+dfsg-3ubuntu2.3 [43.2 kB]
	Get:40 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libldap-common all 2.4.45+dfsg-1ubuntu1.10 [15.8 kB]
	Get:41 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libldap-2.4-2 arm64 2.4.45+dfsg-1ubuntu1.10 [131 kB]
	Get:42 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libnghttp2-14 arm64 1.30.0-1ubuntu1 [68.9 kB]
	Get:43 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-1 [48.2 kB]
	Get:44 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libcurl3-gnutls arm64 7.58.0-2ubuntu3.14 [184 kB]
	Get:45 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 liblvm2app2.2 arm64 2.02.176-4.1ubuntu3.18.04.3 [346 kB]
	Get:46 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libnl-3-200 arm64 3.2.29-0ubuntu3 [44.4 kB]
	Get:47 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libreadline5 arm64 5.2+dfsg-3build1 [82.1 kB]
	Get:48 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libsasl2-modules arm64 2.1.27~101-g0780600+dfsg-3ubuntu2.3 [42.0 kB]
	Get:49 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libyajl2 arm64 2.1.0-2build1 [17.7 kB]
	Get:50 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libvirt0 arm64 4.0.0-1ubuntu8.19 [1182 kB]
	Get:51 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 lvm2 arm64 2.02.176-4.1ubuntu3.18.04.3 [811 kB]
	Fetched 16.2 MB in 2s (9998 kB/s)
	Selecting previously unselected package libssl1.1:arm64.
	(Reading database ... 4044 files and directories currently installed.)
	Preparing to unpack .../00-libssl1.1_1.1.1-1ubuntu2.1~18.04.10_arm64.deb ...
	Unpacking libssl1.1:arm64 (1.1.1-1ubuntu2.1~18.04.10) ...
	Selecting previously unselected package openssl.
	Preparing to unpack .../01-openssl_1.1.1-1ubuntu2.1~18.04.10_arm64.deb ...
	Unpacking openssl (1.1.1-1ubuntu2.1~18.04.10) ...
	Selecting previously unselected package ca-certificates.
	Preparing to unpack .../02-ca-certificates_20210119~18.04.1_all.deb ...
	Unpacking ca-certificates (20210119~18.04.1) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../03-libapparmor1_2.12-4ubuntu5.1_arm64.deb ...
	Unpacking libapparmor1:arm64 (2.12-4ubuntu5.1) ...
	Selecting previously unselected package libdbus-1-3:arm64.
	Preparing to unpack .../04-libdbus-1-3_1.12.2-1ubuntu1.2_arm64.deb ...
	Unpacking libdbus-1-3:arm64 (1.12.2-1ubuntu1.2) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../05-libexpat1_2.2.5-3ubuntu0.2_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.5-3ubuntu0.2) ...
	Selecting previously unselected package dbus.
	Preparing to unpack .../06-dbus_1.12.2-1ubuntu1.2_arm64.deb ...
	Unpacking dbus (1.12.2-1ubuntu1.2) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../07-libdevmapper1.02.1_2%3a1.02.145-4.1ubuntu3.18.04.3_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../08-dmsetup_2%3a1.02.145-4.1ubuntu3.18.04.3_arm64.deb ...
	Unpacking dmsetup (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Selecting previously unselected package libicu60:arm64.
	Preparing to unpack .../09-libicu60_60.2-3ubuntu3.1_arm64.deb ...
	Unpacking libicu60:arm64 (60.2-3ubuntu3.1) ...
	Selecting previously unselected package libsqlite3-0:arm64.
	Preparing to unpack .../10-libsqlite3-0_3.22.0-1ubuntu0.4_arm64.deb ...
	Unpacking libsqlite3-0:arm64 (3.22.0-1ubuntu0.4) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../11-libxml2_2.9.4+dfsg1-6.1ubuntu1.4_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.4+dfsg1-6.1ubuntu1.4) ...
	Selecting previously unselected package readline-common.
	Preparing to unpack .../12-readline-common_7.0-3_all.deb ...
	Unpacking readline-common (7.0-3) ...
	Selecting previously unselected package krb5-locales.
	Preparing to unpack .../13-krb5-locales_1.16-2ubuntu0.2_all.deb ...
	Unpacking krb5-locales (1.16-2ubuntu0.2) ...
	Selecting previously unselected package libkrb5support0:arm64.
	Preparing to unpack .../14-libkrb5support0_1.16-2ubuntu0.2_arm64.deb ...
	Unpacking libkrb5support0:arm64 (1.16-2ubuntu0.2) ...
	Selecting previously unselected package libk5crypto3:arm64.
	Preparing to unpack .../15-libk5crypto3_1.16-2ubuntu0.2_arm64.deb ...
	Unpacking libk5crypto3:arm64 (1.16-2ubuntu0.2) ...
	Selecting previously unselected package libkeyutils1:arm64.
	Preparing to unpack .../16-libkeyutils1_1.5.9-9.2ubuntu2_arm64.deb ...
	Unpacking libkeyutils1:arm64 (1.5.9-9.2ubuntu2) ...
	Selecting previously unselected package libkrb5-3:arm64.
	Preparing to unpack .../17-libkrb5-3_1.16-2ubuntu0.2_arm64.deb ...
	Unpacking libkrb5-3:arm64 (1.16-2ubuntu0.2) ...
	Selecting previously unselected package libgssapi-krb5-2:arm64.
	Preparing to unpack .../18-libgssapi-krb5-2_1.16-2ubuntu0.2_arm64.deb ...
	Unpacking libgssapi-krb5-2:arm64 (1.16-2ubuntu0.2) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../19-libnuma1_2.0.11-2.1ubuntu0.1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.11-2.1ubuntu0.1) ...
	Selecting previously unselected package libpsl5:arm64.
	Preparing to unpack .../20-libpsl5_0.19.1-5build1_arm64.deb ...
	Unpacking libpsl5:arm64 (0.19.1-5build1) ...
	Selecting previously unselected package publicsuffix.
	Preparing to unpack .../21-publicsuffix_20180223.1310-1_all.deb ...
	Unpacking publicsuffix (20180223.1310-1) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../22-libdevmapper-event1.02.1_2%3a1.02.145-4.1ubuntu3.18.04.3_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Selecting previously unselected package liblvm2cmd2.02:arm64.
	Preparing to unpack .../23-liblvm2cmd2.02_2.02.176-4.1ubuntu3.18.04.3_arm64.deb ...
	Unpacking liblvm2cmd2.02:arm64 (2.02.176-4.1ubuntu3.18.04.3) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../24-dmeventd_2%3a1.02.145-4.1ubuntu3.18.04.3_arm64.deb ...
	Unpacking dmeventd (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Selecting previously unselected package libroken18-heimdal:arm64.
	Preparing to unpack .../25-libroken18-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libroken18-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libasn1-8-heimdal:arm64.
	Preparing to unpack .../26-libasn1-8-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libasn1-8-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libavahi-common-data:arm64.
	Preparing to unpack .../27-libavahi-common-data_0.7-3.1ubuntu1.3_arm64.deb ...
	Unpacking libavahi-common-data:arm64 (0.7-3.1ubuntu1.3) ...
	Selecting previously unselected package libavahi-common3:arm64.
	Preparing to unpack .../28-libavahi-common3_0.7-3.1ubuntu1.3_arm64.deb ...
	Unpacking libavahi-common3:arm64 (0.7-3.1ubuntu1.3) ...
	Selecting previously unselected package libavahi-client3:arm64.
	Preparing to unpack .../29-libavahi-client3_0.7-3.1ubuntu1.3_arm64.deb ...
	Unpacking libavahi-client3:arm64 (0.7-3.1ubuntu1.3) ...
	Selecting previously unselected package libheimbase1-heimdal:arm64.
	Preparing to unpack .../30-libheimbase1-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libheimbase1-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libhcrypto4-heimdal:arm64.
	Preparing to unpack .../31-libhcrypto4-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libhcrypto4-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libwind0-heimdal:arm64.
	Preparing to unpack .../32-libwind0-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libwind0-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libhx509-5-heimdal:arm64.
	Preparing to unpack .../33-libhx509-5-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libhx509-5-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libkrb5-26-heimdal:arm64.
	Preparing to unpack .../34-libkrb5-26-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libkrb5-26-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libheimntlm0-heimdal:arm64.
	Preparing to unpack .../35-libheimntlm0-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libheimntlm0-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libgssapi3-heimdal:arm64.
	Preparing to unpack .../36-libgssapi3-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libgssapi3-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../37-libsasl2-modules-db_2.1.27~101-g0780600+dfsg-3ubuntu2.3_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27~101-g0780600+dfsg-3ubuntu2.3) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../38-libsasl2-2_2.1.27~101-g0780600+dfsg-3ubuntu2.3_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27~101-g0780600+dfsg-3ubuntu2.3) ...
	Selecting previously unselected package libldap-common.
	Preparing to unpack .../39-libldap-common_2.4.45+dfsg-1ubuntu1.10_all.deb ...
	Unpacking libldap-common (2.4.45+dfsg-1ubuntu1.10) ...
	Selecting previously unselected package libldap-2.4-2:arm64.
	Preparing to unpack .../40-libldap-2.4-2_2.4.45+dfsg-1ubuntu1.10_arm64.deb ...
	Unpacking libldap-2.4-2:arm64 (2.4.45+dfsg-1ubuntu1.10) ...
	Selecting previously unselected package libnghttp2-14:arm64.
	Preparing to unpack .../41-libnghttp2-14_1.30.0-1ubuntu1_arm64.deb ...
	Unpacking libnghttp2-14:arm64 (1.30.0-1ubuntu1) ...
	Selecting previously unselected package librtmp1:arm64.
	Preparing to unpack .../42-librtmp1_2.4+20151223.gitfa8646d.1-1_arm64.deb ...
	Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-1) ...
	Selecting previously unselected package libcurl3-gnutls:arm64.
	Preparing to unpack .../43-libcurl3-gnutls_7.58.0-2ubuntu3.14_arm64.deb ...
	Unpacking libcurl3-gnutls:arm64 (7.58.0-2ubuntu3.14) ...
	Selecting previously unselected package liblvm2app2.2:arm64.
	Preparing to unpack .../44-liblvm2app2.2_2.02.176-4.1ubuntu3.18.04.3_arm64.deb ...
	Unpacking liblvm2app2.2:arm64 (2.02.176-4.1ubuntu3.18.04.3) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../45-libnl-3-200_3.2.29-0ubuntu3_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.2.29-0ubuntu3) ...
	Selecting previously unselected package libreadline5:arm64.
	Preparing to unpack .../46-libreadline5_5.2+dfsg-3build1_arm64.deb ...
	Unpacking libreadline5:arm64 (5.2+dfsg-3build1) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../47-libsasl2-modules_2.1.27~101-g0780600+dfsg-3ubuntu2.3_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27~101-g0780600+dfsg-3ubuntu2.3) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../48-libyajl2_2.1.0-2build1_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-2build1) ...
	Selecting previously unselected package libvirt0:arm64.
	Preparing to unpack .../49-libvirt0_4.0.0-1ubuntu8.19_arm64.deb ...
	Unpacking libvirt0:arm64 (4.0.0-1ubuntu8.19) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../50-lvm2_2.02.176-4.1ubuntu3.18.04.3_arm64.deb ...
	Unpacking lvm2 (2.02.176-4.1ubuntu3.18.04.3) ...
	Setting up readline-common (7.0-3) ...
	Setting up libexpat1:arm64 (2.2.5-3ubuntu0.2) ...
	Setting up libicu60:arm64 (60.2-3ubuntu3.1) ...
	Setting up libnghttp2-14:arm64 (1.30.0-1ubuntu1) ...
	Setting up libldap-common (2.4.45+dfsg-1ubuntu1.10) ...
	Setting up libpsl5:arm64 (0.19.1-5build1) ...
	Setting up libnuma1:arm64 (2.0.11-2.1ubuntu0.1) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27~101-g0780600+dfsg-3ubuntu2.3) ...
	Setting up libsasl2-2:arm64 (2.1.27~101-g0780600+dfsg-3ubuntu2.3) ...
	Setting up libroken18-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-1) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Setting up libkrb5support0:arm64 (1.16-2ubuntu0.2) ...
	Setting up libxml2:arm64 (2.9.4+dfsg1-6.1ubuntu1.4) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Setting up libyajl2:arm64 (2.1.0-2build1) ...
	Setting up krb5-locales (1.16-2ubuntu0.2) ...
	Setting up publicsuffix (20180223.1310-1) ...
	Setting up libapparmor1:arm64 (2.12-4ubuntu5.1) ...
	Setting up libssl1.1:arm64 (1.1.1-1ubuntu2.1~18.04.10) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.26.1 /usr/local/share/perl/5.26.1 /usr/lib/aarch64-linux-gnu/perl5/5.26 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.26 /usr/share/perl/5.26 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Setting up libheimbase1-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up openssl (1.1.1-1ubuntu2.1~18.04.10) ...
	Setting up libsqlite3-0:arm64 (3.22.0-1ubuntu0.4) ...
	Setting up dmsetup (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Setting up liblvm2app2.2:arm64 (2.02.176-4.1ubuntu3.18.04.3) ...
	Setting up libkeyutils1:arm64 (1.5.9-9.2ubuntu2) ...
	Setting up libreadline5:arm64 (5.2+dfsg-3build1) ...
	Setting up libsasl2-modules:arm64 (2.1.27~101-g0780600+dfsg-3ubuntu2.3) ...
	Setting up libnl-3-200:arm64 (3.2.29-0ubuntu3) ...
	Setting up ca-certificates (20210119~18.04.1) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.26.1 /usr/local/share/perl/5.26.1 /usr/lib/aarch64-linux-gnu/perl5/5.26 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.26 /usr/share/perl/5.26 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Updating certificates in /etc/ssl/certs...
	129 added, 0 removed; done.
	Setting up libdbus-1-3:arm64 (1.12.2-1ubuntu1.2) ...
	Setting up libavahi-common-data:arm64 (0.7-3.1ubuntu1.3) ...
	Setting up libk5crypto3:arm64 (1.16-2ubuntu0.2) ...
	Setting up libwind0-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up libasn1-8-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up libhcrypto4-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up libhx509-5-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up libkrb5-3:arm64 (1.16-2ubuntu0.2) ...
	Setting up libavahi-common3:arm64 (0.7-3.1ubuntu1.3) ...
	Setting up libkrb5-26-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up dbus (1.12.2-1ubuntu1.2) ...
	Setting up libheimntlm0-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up libgssapi-krb5-2:arm64 (1.16-2ubuntu0.2) ...
	Setting up libavahi-client3:arm64 (0.7-3.1ubuntu1.3) ...
	Setting up libgssapi3-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up libldap-2.4-2:arm64 (2.4.45+dfsg-1ubuntu1.10) ...
	Setting up libcurl3-gnutls:arm64 (7.58.0-2ubuntu3.14) ...
	Setting up libvirt0:arm64 (4.0.0-1ubuntu8.19) ...
	Setting up liblvm2cmd2.02:arm64 (2.02.176-4.1ubuntu3.18.04.3) ...
	Setting up dmeventd (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Setting up lvm2 (2.02.176-4.1ubuntu3.18.04.3) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Processing triggers for libc-bin (2.27-3ubuntu1.4) ...
	Processing triggers for ca-certificates (20210119~18.04.1) ...
	Updating certificates in /etc/ssl/certs...
	0 added, 0 removed; done.
	Running hooks in /etc/ca-certificates/update.d...
	done.

                                                
                                                
-- /stdout --
** stderr ** 
	WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb (--install):
	 package architecture (amd64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb

                                                
                                                
** /stderr **
pkg_install_test.go:87: failed to install "/Users/jenkins/workspace/out/docker-machine-driver-kvm2_1.22.0-0_amd64.deb" on "ubuntu:18.04": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (69.45s)
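Note: all three kvm2-driver installs fail with the same architecture mismatch. A quick check to confirm it, assuming dpkg-deb is present in the container (it ships with dpkg), is to compare the Architecture field declared in the package against the architecture the container's dpkg is configured for:

	# architecture declared inside the .deb (amd64)
	dpkg-deb --field /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb Architecture
	# architecture of the running container (arm64 in these runs)
	dpkg --print-architecture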

                                                
                                    
x
+
TestScheduledStopUnix (122.12s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20210812174431-27878 --memory=2048 --driver=docker 
E0812 17:45:16.976337   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20210812174431-27878 --memory=2048 --driver=docker : (1m18.349226525s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20210812174431-27878 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210812174431-27878 -n scheduled-stop-20210812174431-27878
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20210812174431-27878 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20210812174431-27878 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20210812174431-27878 -n scheduled-stop-20210812174431-27878
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20210812174431-27878
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20210812174431-27878 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20210812174431-27878
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20210812174431-27878: exit status 3 (2.051505661s)

                                                
                                                
-- stdout --
	scheduled-stop-20210812174431-27878
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 17:46:21.559616   35940 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 127.0.0.1:56532: connect: connection refused
	E0812 17:46:21.559644   35940 status.go:258] status error: NewSession: new client: new client: dial tcp 127.0.0.1:56532: connect: connection refused

                                                
                                                
** /stderr **
scheduled_stop_test.go:209: minikube status: exit status 3

                                                
                                                
-- stdout --
	scheduled-stop-20210812174431-27878
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 17:46:21.559616   35940 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 127.0.0.1:56532: connect: connection refused
	E0812 17:46:21.559644   35940 status.go:258] status error: NewSession: new client: new client: dial tcp 127.0.0.1:56532: connect: connection refused

                                                
                                                
** /stderr **
panic.go:613: *** TestScheduledStopUnix FAILED at 2021-08-12 17:46:21.561306 -0700 PDT m=+2807.735578343
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect scheduled-stop-20210812174431-27878
helpers_test.go:232: (dbg) Done: docker inspect scheduled-stop-20210812174431-27878: (5.786861474s)
helpers_test.go:236: (dbg) docker inspect scheduled-stop-20210812174431-27878:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ebd1a94a713d2cde997c3522dc935c675defb0f604a65130579ac8bb252efb9a",
	        "Created": "2021-08-13T00:44:39.149839271Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2021-08-13T00:44:50.247619263Z",
	            "FinishedAt": "2021-08-13T00:46:20.902339814Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/ebd1a94a713d2cde997c3522dc935c675defb0f604a65130579ac8bb252efb9a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ebd1a94a713d2cde997c3522dc935c675defb0f604a65130579ac8bb252efb9a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ebd1a94a713d2cde997c3522dc935c675defb0f604a65130579ac8bb252efb9a/hosts",
	        "LogPath": "/var/lib/docker/containers/ebd1a94a713d2cde997c3522dc935c675defb0f604a65130579ac8bb252efb9a/ebd1a94a713d2cde997c3522dc935c675defb0f604a65130579ac8bb252efb9a-json.log",
	        "Name": "/scheduled-stop-20210812174431-27878",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-20210812174431-27878:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-20210812174431-27878",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c29f0cbab302c0288db42816e042d2404d77237f2ba570d4245e5ffe2a5dd6bd-init/diff:/var/lib/docker/overlay2/f715174260cb84cd45d2861bb5b8ef3bf7f57a79e1ad9faf18f7daceacacdb26/diff:/var/lib/docker/overlay2/a04c6447713cacd6930f9744ac163526e823509f0e887a4ee3657e26d18bb3c2/diff:/var/lib/docker/overlay2/f182bed44ffe14b1144a7c2f7e32e7ce023ac9e2bd863f2c8f0c91ea356c8259/diff:/var/lib/docker/overlay2/5d757d10cdc497158de4bbe8dabf9eedf14626e01df4dd6d35d490ecb30bf9c8/diff:/var/lib/docker/overlay2/422eef072395ee54e0f7179c7e52268b84bf915536d45311ae248126459657af/diff:/var/lib/docker/overlay2/e396f199b6cfca371b722f01a1e2924dfb281ea9dbf61d54b41d2fc22e6aa5c5/diff:/var/lib/docker/overlay2/21d9216959f2e55fe3dbcd4d4f8a3167e37a6c92c0d145e26cd16fd2efe2a1b2/diff:/var/lib/docker/overlay2/614f1da60876e55539ea7711a06227980406a7f5dc8c0c3b793eeda2707573a7/diff:/var/lib/docker/overlay2/0d05a121885c7c744ae4e08c64c98f8df852de51e0ff307e883b2c3fa073efe2/diff:/var/lib/docker/overlay2/3ce100
f425aa005ce54b8df8714ab8266243bb723d7a013361924636464b5c87/diff:/var/lib/docker/overlay2/cf6708a46c9ce9be9f514145a802ec8f2c769c5ea11c1f24e0c3fd20d97dd239/diff:/var/lib/docker/overlay2/8777edd041e50344e34361afdeb431abae3cba4ae7c021d7b22a2422a75fbf42/diff:/var/lib/docker/overlay2/212d3f4628f826d9ccda9072489b3e5fda2680eb2cfe50f42189c50154422be5/diff:/var/lib/docker/overlay2/c21265ca31d93d4bee835caeee6814518af67458c0446a1e12b9b1fb9f3fa8bd/diff:/var/lib/docker/overlay2/f0a961af43f72d95eb930eb0529f9e060b0e909abd40509747de016bdf83791d/diff:/var/lib/docker/overlay2/f1c8cdc84add3afd13a9cfe9d8b243943ff31e904557123ef2fc6b1eeed7799b/diff:/var/lib/docker/overlay2/f0c8c2a078356e23d25e8213bba7d4933434f906e58056d9b186842877f3bcfb/diff:/var/lib/docker/overlay2/b1a5f08de123962e7d6bb8e1ea4ea587df3f3e2c1e85f47d284477a51bf585df/diff:/var/lib/docker/overlay2/3a731dacd005bcbb7d156b44814e8a1b532129e6c39d1043c666dede672e32aa/diff:/var/lib/docker/overlay2/a16aa6163aedd0d4b7b2f4528a727ae92139eacdd31ec5b7e3db5192da8c206e/diff:/var/lib/d
ocker/overlay2/7028c72abbb7165efa88355656c2c161d4c8223e49a64d842d8313501dab8df5/diff:/var/lib/docker/overlay2/a0e99187348d94541befd8a8d0539a3a4a53cced33d78e9e109cd849383d21df/diff:/var/lib/docker/overlay2/9f8525edd155caa1bbc85060598c58d57f46446b27c0b9551a86818fbbeca52b/diff:/var/lib/docker/overlay2/a08301d3003a5683a7777b606058b4db1d148ef83d0ef4343ab7c9ca3059a45e/diff:/var/lib/docker/overlay2/18b883f6cb1aa317952b3a6c012b96254657bab2cd7d68e6ed797e606380216a/diff:/var/lib/docker/overlay2/66df4172dcaf74386e1cef7076213ff46f338dc50921929c03d105ffa2c1a68c/diff:/var/lib/docker/overlay2/53be28e913d78c4777c095cefd722c540583d4f8ce03c6ff7ca3b3c89ab37b9e/diff:/var/lib/docker/overlay2/2eb3ffcdff14b928484aa40e21392c7622808397ddde81596014c9ea1f14722d/diff:/var/lib/docker/overlay2/e31a2b59e27071979e8606deb8bba8cd284962210c0505e59e785e27197278ae/diff:/var/lib/docker/overlay2/3bc51237da47ba3959771beeb969bccc2058ae8d8e91dc367eed0354120af541/diff:/var/lib/docker/overlay2/7d005a1ac0be7776984b9bbfd904a6e1d20810ac8c7e13ed5a82610e174
cf823/diff:/var/lib/docker/overlay2/dea645fc9191a67162267978e1f134969486af076dbc70da7e3761f554a3317c/diff:/var/lib/docker/overlay2/6a3ed620466ebb13eb262fbedbb5bc90976c82d824e3c3ee8d978c8b1cfb12cc/diff:/var/lib/docker/overlay2/7a473817be0962d3e2ae1f57f32e95115af914c56a786f2d4d15a9dca232cefa/diff:/var/lib/docker/overlay2/3ca997de4525080aca8f86ad0f68f4f26acc4262a80846cfc96b3d4af8dd2526/diff:/var/lib/docker/overlay2/ad3ce384b651be2a1810da477a29e598be710b6e40f940a3bb3a4a9ed7ee048d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c29f0cbab302c0288db42816e042d2404d77237f2ba570d4245e5ffe2a5dd6bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c29f0cbab302c0288db42816e042d2404d77237f2ba570d4245e5ffe2a5dd6bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c29f0cbab302c0288db42816e042d2404d77237f2ba570d4245e5ffe2a5dd6bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-20210812174431-27878",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-20210812174431-27878/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-20210812174431-27878",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-20210812174431-27878",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-20210812174431-27878",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c1a1160cb0e917849700598494e99b741690a17cf598c0248becc08eb1eb37bc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/c1a1160cb0e9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-20210812174431-27878": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ebd1a94a713d",
	                        "scheduled-stop-20210812174431-27878"
	                    ],
	                    "NetworkID": "64d3fd5b98d93407af3204f0987da329fed74813334c151c4d6dfd50dbb1a525",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20210812174431-27878 -n scheduled-stop-20210812174431-27878
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20210812174431-27878 -n scheduled-stop-20210812174431-27878: exit status 7 (215.909731ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "scheduled-stop-20210812174431-27878" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "scheduled-stop-20210812174431-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20210812174431-27878
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20210812174431-27878: (6.340484682s)
--- FAIL: TestScheduledStopUnix (122.12s)
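For context, TestScheduledStopUnix exercises minikube's scheduled-stop flow: schedule a stop, read the pending schedule from status, cancel it, then reschedule and verify the node actually stops. The run above failed because the host reported an error state right after the short schedule fired. The commands below are a minimal sketch of the same sequence, lifted from the log above; "demo" is a placeholder profile name.

	# Sketch of the scheduled-stop flow exercised by this test ("demo" is a placeholder profile).
	minikube start  -p demo --memory=2048 --driver=docker
	minikube stop   -p demo --schedule 5m                        # schedule a stop in 5 minutes
	minikube status -p demo --format '{{.TimeToStop}}'           # inspect the pending schedule
	minikube stop   -p demo --cancel-scheduled                   # cancel the pending stop
	minikube stop   -p demo --schedule 5s                        # reschedule; the node stops shortly after
	minikube status -p demo                                      # once stopped, status exits 7 ("Stopped"), as seen above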

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (551.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p calico-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : exit status 1 (9m11.128314544s)

                                                
                                                
-- stdout --
	* [calico-20210812175913-27878] minikube v1.22.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node calico-20210812175913-27878 in cluster calico-20210812175913-27878
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 18:25:03.943009   45360 out.go:298] Setting OutFile to fd 1 ...
	I0812 18:25:03.943149   45360 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 18:25:03.943155   45360 out.go:311] Setting ErrFile to fd 2...
	I0812 18:25:03.943158   45360 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 18:25:03.943250   45360 root.go:313] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0812 18:25:03.943605   45360 out.go:305] Setting JSON to false
	I0812 18:25:03.964025   45360 start.go:111] hostinfo: {"hostname":"37310.local","uptime":15877,"bootTime":1628802026,"procs":328,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"c86236b2-4976-3542-80ca-74a6b8b4ba03"}
	W0812 18:25:03.964149   45360 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0812 18:25:03.990362   45360 out.go:177] * [calico-20210812175913-27878] minikube v1.22.0 on Darwin 11.2.3
	I0812 18:25:03.990563   45360 notify.go:169] Checking for updates...
	I0812 18:25:04.039358   45360 out.go:177]   - MINIKUBE_LOCATION=12230
	I0812 18:25:04.065318   45360 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0812 18:25:04.091436   45360 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0812 18:25:04.117544   45360 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0812 18:25:04.118191   45360 driver.go:335] Setting default libvirt URI to qemu:///system
	I0812 18:25:04.225772   45360 docker.go:132] docker version: linux-20.10.6
	I0812 18:25:04.225934   45360 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 18:25:04.432108   45360 info.go:263] docker info: {ID:NUVB:KIYS:WZ5S:BBBQ:I5K6:TSUW:ISZD:Z2IF:JI5D:OMPC:DPCS:TFBF Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2021-08-13 01:25:04.362681598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 18:25:04.459524   45360 out.go:177] * Using the docker driver based on user configuration
	I0812 18:25:04.459551   45360 start.go:278] selected driver: docker
	I0812 18:25:04.459562   45360 start.go:751] validating driver "docker" against <nil>
	I0812 18:25:04.459592   45360 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0812 18:25:04.462271   45360 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 18:25:04.659238   45360 info.go:263] docker info: {ID:NUVB:KIYS:WZ5S:BBBQ:I5K6:TSUW:ISZD:Z2IF:JI5D:OMPC:DPCS:TFBF Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2021-08-13 01:25:04.596630566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 18:25:04.659338   45360 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0812 18:25:04.659462   45360 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 18:25:04.659479   45360 cni.go:93] Creating CNI manager for "calico"
	I0812 18:25:04.659486   45360 start_flags.go:272] Found "Calico" CNI - setting NetworkPlugin=cni
	I0812 18:25:04.659497   45360 start_flags.go:277] config:
	{Name:calico-20210812175913-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210812175913-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Ne
tworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 18:25:04.692945   45360 out.go:177] * Starting control plane node calico-20210812175913-27878 in cluster calico-20210812175913-27878
	I0812 18:25:04.693020   45360 cache.go:117] Beginning downloading kic base image for docker with docker
	I0812 18:25:04.717752   45360 out.go:177] * Pulling base image ...
	I0812 18:25:04.717802   45360 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0812 18:25:04.717841   45360 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0812 18:25:04.717854   45360 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4
	I0812 18:25:04.717867   45360 cache.go:56] Caching tarball of preloaded images
	I0812 18:25:04.718000   45360 preload.go:173] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0812 18:25:04.718017   45360 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0812 18:25:04.719029   45360 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/config.json ...
	I0812 18:25:04.719139   45360 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/config.json: {Name:mk32fed71e25ff4f3cd8c0c30691fdf317e35376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:25:04.854418   45360 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0812 18:25:04.854451   45360 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0812 18:25:04.854463   45360 cache.go:205] Successfully downloaded all kic artifacts
	I0812 18:25:04.854506   45360 start.go:313] acquiring machines lock for calico-20210812175913-27878: {Name:mkda66c22efb0f15c67da6f88c442c66bb25605e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 18:25:04.855103   45360 start.go:317] acquired machines lock for "calico-20210812175913-27878" in 583.166µs
	I0812 18:25:04.855159   45360 start.go:89] Provisioning new machine with config: &{Name:calico-20210812175913-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210812175913-27878 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0812 18:25:04.855255   45360 start.go:126] createHost starting for "" (driver="docker")
	I0812 18:25:04.881213   45360 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0812 18:25:04.881428   45360 start.go:160] libmachine.API.Create for "calico-20210812175913-27878" (driver="docker")
	I0812 18:25:04.881457   45360 client.go:168] LocalClient.Create starting
	I0812 18:25:04.881571   45360 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0812 18:25:04.901833   45360 main.go:130] libmachine: Decoding PEM data...
	I0812 18:25:04.901880   45360 main.go:130] libmachine: Parsing certificate...
	I0812 18:25:04.902071   45360 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0812 18:25:04.902179   45360 main.go:130] libmachine: Decoding PEM data...
	I0812 18:25:04.902214   45360 main.go:130] libmachine: Parsing certificate...
	I0812 18:25:04.903092   45360 cli_runner.go:115] Run: docker network inspect calico-20210812175913-27878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0812 18:25:05.032444   45360 cli_runner.go:162] docker network inspect calico-20210812175913-27878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0812 18:25:05.032580   45360 network_create.go:255] running [docker network inspect calico-20210812175913-27878] to gather additional debugging logs...
	I0812 18:25:05.032601   45360 cli_runner.go:115] Run: docker network inspect calico-20210812175913-27878
	W0812 18:25:05.157520   45360 cli_runner.go:162] docker network inspect calico-20210812175913-27878 returned with exit code 1
	I0812 18:25:05.157556   45360 network_create.go:258] error running [docker network inspect calico-20210812175913-27878]: docker network inspect calico-20210812175913-27878: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20210812175913-27878
	I0812 18:25:05.157568   45360 network_create.go:260] output of [docker network inspect calico-20210812175913-27878]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20210812175913-27878
	
	** /stderr **
	I0812 18:25:05.157659   45360 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0812 18:25:05.279155   45360 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001121a8] misses:0}
	I0812 18:25:05.279193   45360 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0812 18:25:05.279215   45360 network_create.go:106] attempt to create docker network calico-20210812175913-27878 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0812 18:25:05.279306   45360 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210812175913-27878
	W0812 18:25:05.402714   45360 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210812175913-27878 returned with exit code 1
	W0812 18:25:05.402758   45360 network_create.go:98] failed to create docker network calico-20210812175913-27878 192.168.49.0/24, will retry: subnet is taken
	I0812 18:25:05.403010   45360 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001121a8] amended:false}} dirty:map[] misses:0}
	I0812 18:25:05.403031   45360 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0812 18:25:05.403216   45360 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001121a8] amended:true}} dirty:map[192.168.49.0:0xc0001121a8 192.168.58.0:0xc0005b8420] misses:0}
	I0812 18:25:05.403229   45360 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0812 18:25:05.403237   45360 network_create.go:106] attempt to create docker network calico-20210812175913-27878 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0812 18:25:05.403321   45360 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210812175913-27878
	I0812 18:25:07.659969   45360 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210812175913-27878: (2.256592923s)
	I0812 18:25:07.659996   45360 network_create.go:90] docker network calico-20210812175913-27878 192.168.58.0/24 created
	I0812 18:25:07.660013   45360 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20210812175913-27878" container
	I0812 18:25:07.660131   45360 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0812 18:25:07.782496   45360 cli_runner.go:115] Run: docker volume create calico-20210812175913-27878 --label name.minikube.sigs.k8s.io=calico-20210812175913-27878 --label created_by.minikube.sigs.k8s.io=true
	I0812 18:25:07.902701   45360 oci.go:102] Successfully created a docker volume calico-20210812175913-27878
	I0812 18:25:07.902817   45360 cli_runner.go:115] Run: docker run --rm --name calico-20210812175913-27878-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210812175913-27878 --entrypoint /usr/bin/test -v calico-20210812175913-27878:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0812 18:25:08.413412   45360 oci.go:106] Successfully prepared a docker volume calico-20210812175913-27878
	I0812 18:25:08.413485   45360 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0812 18:25:08.413504   45360 kic.go:179] Starting extracting preloaded images to volume ...
	I0812 18:25:08.413571   45360 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0812 18:25:08.413593   45360 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210812175913-27878:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0812 18:25:08.628409   45360 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20210812175913-27878 --name calico-20210812175913-27878 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210812175913-27878 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20210812175913-27878 --network calico-20210812175913-27878 --ip 192.168.58.2 --volume calico-20210812175913-27878:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0812 18:25:12.773459   45360 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20210812175913-27878 --name calico-20210812175913-27878 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210812175913-27878 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20210812175913-27878 --network calico-20210812175913-27878 --ip 192.168.58.2 --volume calico-20210812175913-27878:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79: (4.144971919s)
	I0812 18:25:12.773574   45360 cli_runner.go:115] Run: docker container inspect calico-20210812175913-27878 --format={{.State.Running}}
	I0812 18:25:12.944167   45360 cli_runner.go:115] Run: docker container inspect calico-20210812175913-27878 --format={{.State.Status}}
	I0812 18:25:13.092863   45360 cli_runner.go:115] Run: docker exec calico-20210812175913-27878 stat /var/lib/dpkg/alternatives/iptables
	I0812 18:25:13.303726   45360 oci.go:278] the created container "calico-20210812175913-27878" has a running status.
	I0812 18:25:13.303774   45360 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210812175913-27878/id_rsa...
	I0812 18:25:13.762422   45360 kic_runner.go:188] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210812175913-27878/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0812 18:25:14.131633   45360 cli_runner.go:115] Run: docker container inspect calico-20210812175913-27878 --format={{.State.Status}}
	I0812 18:25:14.266546   45360 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210812175913-27878:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (5.852843832s)
	I0812 18:25:14.266576   45360 kic.go:188] duration metric: took 5.853053 seconds to extract preloaded images to volume
	I0812 18:25:14.286573   45360 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0812 18:25:14.286595   45360 kic_runner.go:115] Args: [docker exec --privileged calico-20210812175913-27878 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0812 18:25:14.473604   45360 cli_runner.go:115] Run: docker container inspect calico-20210812175913-27878 --format={{.State.Status}}
	I0812 18:25:14.602740   45360 machine.go:88] provisioning docker machine ...
	I0812 18:25:14.602786   45360 ubuntu.go:169] provisioning hostname "calico-20210812175913-27878"
	I0812 18:25:14.602931   45360 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210812175913-27878
	I0812 18:25:14.732394   45360 main.go:130] libmachine: Using SSH client type: native
	I0812 18:25:14.732596   45360 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 58433 <nil> <nil>}
	I0812 18:25:14.732611   45360 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20210812175913-27878 && echo "calico-20210812175913-27878" | sudo tee /etc/hostname
	I0812 18:25:14.868620   45360 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20210812175913-27878
	
	I0812 18:25:14.868731   45360 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210812175913-27878
	I0812 18:25:14.995151   45360 main.go:130] libmachine: Using SSH client type: native
	I0812 18:25:14.995387   45360 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 58433 <nil> <nil>}
	I0812 18:25:14.995405   45360 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20210812175913-27878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210812175913-27878/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20210812175913-27878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 18:25:15.119601   45360 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0812 18:25:15.119626   45360 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0812 18:25:15.119653   45360 ubuntu.go:177] setting up certificates
	I0812 18:25:15.119661   45360 provision.go:83] configureAuth start
	I0812 18:25:15.119748   45360 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210812175913-27878
	I0812 18:25:15.256346   45360 provision.go:137] copyHostCerts
	I0812 18:25:15.256458   45360 exec_runner.go:145] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0812 18:25:15.256469   45360 exec_runner.go:190] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0812 18:25:15.256853   45360 exec_runner.go:152] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0812 18:25:15.257066   45360 exec_runner.go:145] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0812 18:25:15.257078   45360 exec_runner.go:190] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0812 18:25:15.257153   45360 exec_runner.go:152] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0812 18:25:15.257315   45360 exec_runner.go:145] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0812 18:25:15.257322   45360 exec_runner.go:190] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0812 18:25:15.257387   45360 exec_runner.go:152] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1675 bytes)
	I0812 18:25:15.257547   45360 provision.go:111] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.calico-20210812175913-27878 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20210812175913-27878]
	I0812 18:25:15.373955   45360 provision.go:171] copyRemoteCerts
	I0812 18:25:15.374216   45360 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 18:25:15.374281   45360 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210812175913-27878
	I0812 18:25:15.515051   45360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58433 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210812175913-27878/id_rsa Username:docker}
	I0812 18:25:15.607545   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0812 18:25:15.626544   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 18:25:15.650707   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 18:25:15.673972   45360 provision.go:86] duration metric: configureAuth took 554.294663ms
	I0812 18:25:15.673986   45360 ubuntu.go:193] setting minikube options for container-runtime
	I0812 18:25:15.674249   45360 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210812175913-27878
	I0812 18:25:15.813890   45360 main.go:130] libmachine: Using SSH client type: native
	I0812 18:25:15.814057   45360 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 58433 <nil> <nil>}
	I0812 18:25:15.814073   45360 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0812 18:25:15.938168   45360 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0812 18:25:15.938189   45360 ubuntu.go:71] root file system type: overlay
	I0812 18:25:15.938448   45360 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0812 18:25:15.938564   45360 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210812175913-27878
	I0812 18:25:16.078267   45360 main.go:130] libmachine: Using SSH client type: native
	I0812 18:25:16.078428   45360 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 58433 <nil> <nil>}
	I0812 18:25:16.078488   45360 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0812 18:25:16.206241   45360 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0812 18:25:16.206387   45360 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210812175913-27878
	I0812 18:25:16.345320   45360 main.go:130] libmachine: Using SSH client type: native
	I0812 18:25:16.345501   45360 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 58433 <nil> <nil>}
	I0812 18:25:16.345516   45360 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0812 18:25:34.908466   45360 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:54:50.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-13 01:25:16.204182053 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0812 18:25:34.908506   45360 machine.go:91] provisioned docker machine in 20.305675593s
	I0812 18:25:34.908513   45360 client.go:171] LocalClient.Create took 30.026955345s
	I0812 18:25:34.908531   45360 start.go:168] duration metric: libmachine.API.Create for "calico-20210812175913-27878" took 30.027005146s
	I0812 18:25:34.908542   45360 start.go:267] post-start starting for "calico-20210812175913-27878" (driver="docker")
	I0812 18:25:34.908546   45360 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 18:25:34.908652   45360 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 18:25:34.908738   45360 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210812175913-27878
	I0812 18:25:35.032245   45360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58433 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210812175913-27878/id_rsa Username:docker}
	I0812 18:25:35.120003   45360 ssh_runner.go:149] Run: cat /etc/os-release
	I0812 18:25:35.123846   45360 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0812 18:25:35.123865   45360 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0812 18:25:35.123871   45360 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0812 18:25:35.123879   45360 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0812 18:25:35.123894   45360 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0812 18:25:35.124289   45360 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0812 18:25:35.125061   45360 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/278782.pem -> 278782.pem in /etc/ssl/certs
	I0812 18:25:35.125252   45360 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0812 18:25:35.132370   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/278782.pem --> /etc/ssl/certs/278782.pem (1708 bytes)
	I0812 18:25:35.151617   45360 start.go:270] post-start completed in 243.064997ms
	I0812 18:25:35.152146   45360 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210812175913-27878
	I0812 18:25:35.273991   45360 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/config.json ...
	I0812 18:25:35.274528   45360 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 18:25:35.274616   45360 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210812175913-27878
	I0812 18:25:35.399911   45360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58433 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210812175913-27878/id_rsa Username:docker}
	I0812 18:25:35.485456   45360 start.go:129] duration metric: createHost completed in 30.630091004s
	I0812 18:25:35.485475   45360 start.go:80] releasing machines lock for "calico-20210812175913-27878", held for 30.630262594s
	I0812 18:25:35.485606   45360 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210812175913-27878
	I0812 18:25:35.612935   45360 ssh_runner.go:149] Run: systemctl --version
	I0812 18:25:35.613013   45360 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210812175913-27878
	I0812 18:25:35.613117   45360 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0812 18:25:35.614222   45360 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210812175913-27878
	I0812 18:25:35.744959   45360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58433 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210812175913-27878/id_rsa Username:docker}
	I0812 18:25:35.744989   45360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58433 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210812175913-27878/id_rsa Username:docker}
	I0812 18:25:35.931937   45360 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0812 18:25:35.941425   45360 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0812 18:25:35.951501   45360 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0812 18:25:35.951576   45360 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0812 18:25:35.961677   45360 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 18:25:35.975008   45360 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0812 18:25:36.032655   45360 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0812 18:25:36.089960   45360 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0812 18:25:36.100052   45360 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0812 18:25:36.159093   45360 ssh_runner.go:149] Run: sudo systemctl start docker
	I0812 18:25:36.169470   45360 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0812 18:25:36.221314   45360 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0812 18:25:36.323170   45360 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0812 18:25:36.323277   45360 cli_runner.go:115] Run: docker exec -t calico-20210812175913-27878 dig +short host.docker.internal
	I0812 18:25:36.521232   45360 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
	I0812 18:25:36.521877   45360 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0812 18:25:36.526808   45360 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 18:25:36.540381   45360 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20210812175913-27878
	I0812 18:25:36.668454   45360 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0812 18:25:36.668549   45360 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 18:25:36.711607   45360 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0812 18:25:36.711619   45360 docker.go:466] Images already preloaded, skipping extraction
	I0812 18:25:36.711710   45360 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 18:25:36.749714   45360 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0812 18:25:36.749730   45360 cache_images.go:74] Images are preloaded, skipping loading
	I0812 18:25:36.749835   45360 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0812 18:25:36.842791   45360 cni.go:93] Creating CNI manager for "calico"
	I0812 18:25:36.842811   45360 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0812 18:25:36.842824   45360 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210812175913-27878 NodeName:calico-20210812175913-27878 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0812 18:25:36.842932   45360 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20210812175913-27878"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 18:25:36.843028   45360 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20210812175913-27878 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:calico-20210812175913-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0812 18:25:36.843101   45360 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0812 18:25:36.850920   45360 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 18:25:36.850988   45360 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 18:25:36.859203   45360 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0812 18:25:36.876076   45360 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 18:25:36.889542   45360 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I0812 18:25:36.903178   45360 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0812 18:25:36.907679   45360 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 18:25:36.918067   45360 certs.go:52] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878 for IP: 192.168.58.2
	I0812 18:25:36.918168   45360 certs.go:179] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0812 18:25:36.918204   45360 certs.go:179] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0812 18:25:36.918278   45360 certs.go:294] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/client.key
	I0812 18:25:36.918291   45360 crypto.go:69] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/client.crt with IP's: []
	I0812 18:25:37.248773   45360 crypto.go:157] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/client.crt ...
	I0812 18:25:37.248789   45360 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/client.crt: {Name:mk21433ac8ebf608e3c4dd6d61ed3cfef693b01c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:25:37.250020   45360 crypto.go:165] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/client.key ...
	I0812 18:25:37.250037   45360 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/client.key: {Name:mk01eaf8e6965281ef581384393a561832a0cf75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:25:37.250655   45360 certs.go:294] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/apiserver.key.cee25041
	I0812 18:25:37.250663   45360 crypto.go:69] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0812 18:25:37.305628   45360 crypto.go:157] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/apiserver.crt.cee25041 ...
	I0812 18:25:37.305642   45360 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/apiserver.crt.cee25041: {Name:mk2a5338c322ca1ca429293ca72406f6ad6826f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:25:37.306278   45360 crypto.go:165] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/apiserver.key.cee25041 ...
	I0812 18:25:37.306293   45360 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/apiserver.key.cee25041: {Name:mk2d0bdc1e1ee4ed02428252feb9d6cbdcae3a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:25:37.306474   45360 certs.go:305] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/apiserver.crt
	I0812 18:25:37.306636   45360 certs.go:309] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/apiserver.key
	I0812 18:25:37.306792   45360 certs.go:294] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/proxy-client.key
	I0812 18:25:37.306798   45360 crypto.go:69] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/proxy-client.crt with IP's: []
	I0812 18:25:37.470519   45360 crypto.go:157] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/proxy-client.crt ...
	I0812 18:25:37.470533   45360 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/proxy-client.crt: {Name:mk269c90f273645eee18e30e80608c7341a308dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:25:37.471566   45360 crypto.go:165] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/proxy-client.key ...
	I0812 18:25:37.471579   45360 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/proxy-client.key: {Name:mk7976dcd09c1a41b4c3f99d7319795c42b2dcac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:25:37.472353   45360 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/27878.pem (1338 bytes)
	W0812 18:25:37.472402   45360 certs.go:369] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/27878_empty.pem, impossibly tiny 0 bytes
	I0812 18:25:37.472414   45360 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 18:25:37.472451   45360 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0812 18:25:37.472486   45360 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0812 18:25:37.472524   45360 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1675 bytes)
	I0812 18:25:37.472597   45360 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/278782.pem (1708 bytes)
	I0812 18:25:37.473409   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0812 18:25:37.490998   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 18:25:37.507922   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 18:25:37.524346   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210812175913-27878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 18:25:37.540940   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 18:25:37.557950   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0812 18:25:37.575915   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 18:25:37.592657   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0812 18:25:37.609994   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/27878.pem --> /usr/share/ca-certificates/27878.pem (1338 bytes)
	I0812 18:25:37.626768   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/278782.pem --> /usr/share/ca-certificates/278782.pem (1708 bytes)
	I0812 18:25:37.643799   45360 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 18:25:37.661050   45360 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 18:25:37.674767   45360 ssh_runner.go:149] Run: openssl version
	I0812 18:25:37.680561   45360 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/278782.pem && ln -fs /usr/share/ca-certificates/278782.pem /etc/ssl/certs/278782.pem"
	I0812 18:25:37.688769   45360 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/278782.pem
	I0812 18:25:37.692845   45360 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 13 00:03 /usr/share/ca-certificates/278782.pem
	I0812 18:25:37.692918   45360 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/278782.pem
	I0812 18:25:37.698305   45360 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/278782.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 18:25:37.705828   45360 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 18:25:37.713660   45360 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 18:25:37.717674   45360 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 13 00:01 /usr/share/ca-certificates/minikubeCA.pem
	I0812 18:25:37.717720   45360 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 18:25:37.723383   45360 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 18:25:37.731149   45360 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27878.pem && ln -fs /usr/share/ca-certificates/27878.pem /etc/ssl/certs/27878.pem"
	I0812 18:25:37.738893   45360 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/27878.pem
	I0812 18:25:37.742849   45360 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 13 00:03 /usr/share/ca-certificates/27878.pem
	I0812 18:25:37.742897   45360 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27878.pem
	I0812 18:25:37.748570   45360 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27878.pem /etc/ssl/certs/51391683.0"
	I0812 18:25:37.756296   45360 kubeadm.go:390] StartCluster: {Name:calico-20210812175913-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210812175913-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 18:25:37.756408   45360 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0812 18:25:37.789649   45360 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 18:25:37.797184   45360 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 18:25:37.804213   45360 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0812 18:25:37.804282   45360 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 18:25:37.811883   45360 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 18:25:37.811909   45360 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0812 18:25:38.603081   45360 out.go:204]   - Generating certificates and keys ...
	I0812 18:25:41.018656   45360 out.go:204]   - Booting up control plane ...
	I0812 18:25:59.559366   45360 out.go:204]   - Configuring RBAC rules ...
	I0812 18:25:59.943493   45360 cni.go:93] Creating CNI manager for "calico"
	I0812 18:25:59.969655   45360 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0812 18:25:59.969962   45360 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0812 18:25:59.969973   45360 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (202053 bytes)
	I0812 18:25:59.991583   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0812 18:26:00.976056   45360 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 18:26:00.976125   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:00.976153   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=calico-20210812175913-27878 minikube.k8s.io/updated_at=2021_08_12T18_26_00_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:00.995227   45360 ops.go:34] apiserver oom_adj: -16
	I0812 18:26:01.283086   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:01.901580   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:02.405851   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:02.903384   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:03.402296   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:03.904018   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:04.402255   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:04.905875   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:05.401730   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:05.902116   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:06.401565   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:06.901739   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:07.403235   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:07.902712   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:08.401672   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:08.903365   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:09.401606   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:09.908884   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:10.401660   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:10.901578   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:11.402061   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:11.905926   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:12.404324   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:12.901705   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:13.402027   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:13.902383   45360 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:26:13.992759   45360 kubeadm.go:985] duration metric: took 13.016672427s to wait for elevateKubeSystemPrivileges.
	I0812 18:26:13.992776   45360 kubeadm.go:392] StartCluster complete in 36.236369418s
	I0812 18:26:13.992790   45360 settings.go:142] acquiring lock: {Name:mk3e1d203e6439798c8d384e90b2bc232b4914ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:26:13.992885   45360 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0812 18:26:13.995024   45360 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mka81e290e52453cdddcec52ed4fa17d888b133f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:26:14.566999   45360 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20210812175913-27878" rescaled to 1
	I0812 18:26:14.567037   45360 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0812 18:26:14.567055   45360 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 18:26:14.567080   45360 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0812 18:26:14.567135   45360 addons.go:59] Setting storage-provisioner=true in profile "calico-20210812175913-27878"
	I0812 18:26:14.594596   45360 out.go:177] * Verifying Kubernetes components...
	I0812 18:26:14.567184   45360 addons.go:59] Setting default-storageclass=true in profile "calico-20210812175913-27878"
	I0812 18:26:14.594631   45360 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20210812175913-27878"
	I0812 18:26:14.594630   45360 addons.go:135] Setting addon storage-provisioner=true in "calico-20210812175913-27878"
	W0812 18:26:14.594647   45360 addons.go:147] addon storage-provisioner should already be in state true
	I0812 18:26:14.594685   45360 host.go:66] Checking if "calico-20210812175913-27878" exists ...
	I0812 18:26:14.594698   45360 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0812 18:26:14.595146   45360 cli_runner.go:115] Run: docker container inspect calico-20210812175913-27878 --format={{.State.Status}}
	I0812 18:26:14.595159   45360 cli_runner.go:115] Run: docker container inspect calico-20210812175913-27878 --format={{.State.Status}}
	I0812 18:26:14.664759   45360 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 18:26:14.664823   45360 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20210812175913-27878
	I0812 18:26:14.771589   45360 addons.go:135] Setting addon default-storageclass=true in "calico-20210812175913-27878"
	I0812 18:26:14.788664   45360 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0812 18:26:14.788665   45360 addons.go:147] addon default-storageclass should already be in state true
	I0812 18:26:14.788703   45360 host.go:66] Checking if "calico-20210812175913-27878" exists ...
	I0812 18:26:14.788835   45360 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 18:26:14.788849   45360 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 18:26:14.788940   45360 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210812175913-27878
	I0812 18:26:14.790856   45360 cli_runner.go:115] Run: docker container inspect calico-20210812175913-27878 --format={{.State.Status}}
	I0812 18:26:14.851687   45360 node_ready.go:35] waiting up to 5m0s for node "calico-20210812175913-27878" to be "Ready" ...
	I0812 18:26:14.856908   45360 node_ready.go:49] node "calico-20210812175913-27878" has status "Ready":"True"
	I0812 18:26:14.856921   45360 node_ready.go:38] duration metric: took 5.197825ms waiting for node "calico-20210812175913-27878" to be "Ready" ...
	I0812 18:26:14.856929   45360 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 18:26:14.870119   45360 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace to be "Ready" ...
	I0812 18:26:14.971708   45360 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 18:26:14.971732   45360 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 18:26:14.971821   45360 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210812175913-27878
	I0812 18:26:14.976103   45360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58433 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210812175913-27878/id_rsa Username:docker}
	I0812 18:26:15.065720   45360 start.go:736] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0812 18:26:15.138487   45360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58433 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210812175913-27878/id_rsa Username:docker}
	I0812 18:26:15.151697   45360 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 18:26:15.269014   45360 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 18:26:15.906884   45360 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0812 18:26:15.906923   45360 addons.go:344] enableAddons completed in 1.33983346s
	I0812 18:26:16.894696   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:19.394346   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:21.395383   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:23.459882   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:25.896230   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:28.395014   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:30.396791   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:32.894807   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:34.896583   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:36.930806   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:39.395232   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:41.397072   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:43.893923   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:45.895525   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:47.924894   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:50.392988   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:52.393477   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:54.396065   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:56.897690   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:26:59.393331   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:01.890839   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:04.391176   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:06.393132   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:08.892880   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:11.394893   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:13.397063   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:15.399843   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:17.897029   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:20.398272   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:22.399801   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:24.399996   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:26.897459   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:28.899148   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:31.397667   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:33.397688   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:35.397810   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:37.897508   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:40.396465   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:42.400079   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:44.903445   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:47.397361   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:49.398748   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:51.898851   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:54.398931   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:56.897078   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:27:58.897390   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:00.897512   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:03.397783   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:05.398339   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:07.895828   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:10.394857   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:12.399297   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:14.897218   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:17.396782   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:19.890532   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:22.390642   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:24.393335   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:26.890696   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:28.891819   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:31.391662   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:33.392906   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:35.397283   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:37.400980   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:39.401257   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:41.895238   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:43.896751   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:45.897679   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:47.898134   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:49.898959   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:52.394429   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:54.394771   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:56.400683   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:28:58.897477   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:00.897564   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:03.394742   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:05.900590   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:08.395981   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:10.399903   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:12.895811   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:14.898263   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:16.899717   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:19.412088   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:21.897750   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:24.410707   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:26.898416   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:29.398433   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:31.899563   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:34.401070   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:36.896155   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:39.395131   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:41.402005   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:43.895473   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:46.391843   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:48.391929   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:50.392233   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:52.889535   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:54.894577   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:56.899109   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:29:59.395621   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:01.402755   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:03.891257   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:06.390646   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:08.899787   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:11.390450   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:13.390716   45360 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:14.898988   45360 pod_ready.go:81] duration metric: took 4m0.028028598s waiting for pod "calico-kube-controllers-85ff9ff759-lrdjc" in "kube-system" namespace to be "Ready" ...
	E0812 18:30:14.899000   45360 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0812 18:30:14.899015   45360 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-pw4bw" in "kube-system" namespace to be "Ready" ...
	I0812 18:30:16.911751   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:19.411024   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:21.413592   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:23.912893   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:26.413763   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:28.416235   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:30.910921   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:32.911338   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:35.416511   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:37.912421   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:39.913473   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:42.414612   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:44.913145   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:47.412353   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:49.912799   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:52.411953   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:54.413336   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:56.910683   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:30:58.913141   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:01.413282   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:03.912496   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:06.411358   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:08.414716   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:10.910363   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:12.913253   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:14.914372   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:17.411813   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:19.413258   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:21.916440   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:24.416913   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:26.911321   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:28.912271   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:31.413791   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:33.912422   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:36.410974   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:38.415589   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:40.913675   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:42.916389   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:45.412325   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:47.413059   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:49.413460   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:51.912023   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:54.417643   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:56.910990   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:31:58.912757   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:01.412595   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:03.417169   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:05.912946   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:08.412520   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:10.414014   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:12.415843   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:14.421926   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:16.911472   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:18.912104   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:21.411513   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:23.415205   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:25.911906   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:27.916565   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:30.419076   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:32.915755   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:35.411439   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:37.911245   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:39.915760   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:42.412017   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:44.412500   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:46.915436   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:48.916697   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:51.415180   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:53.913080   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:56.411408   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:32:58.415309   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:00.415644   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:02.918787   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:04.920259   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:07.413689   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:09.920807   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:12.412601   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:14.915441   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:17.413140   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:19.413929   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:21.913642   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:24.417981   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:26.913556   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:28.920235   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:31.411835   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:33.418246   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:35.919408   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:38.418294   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:40.915333   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:43.413027   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:45.417257   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:47.981580   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:49.981812   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:51.986781   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:54.481767   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:56.481863   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:33:58.482977   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:34:00.486547   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:34:02.983890   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:34:04.985126   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:34:07.482537   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:34:09.988209   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:34:12.491137   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:34:14.988155   45360 pod_ready.go:102] pod "calico-node-pw4bw" in "kube-system" namespace has status "Ready":"False"
	I0812 18:34:14.995417   45360 pod_ready.go:81] duration metric: took 4m0.025884456s waiting for pod "calico-node-pw4bw" in "kube-system" namespace to be "Ready" ...
	E0812 18:34:14.995428   45360 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0812 18:34:14.995444   45360 pod_ready.go:38] duration metric: took 8m0.067174093s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 18:34:15.022987   45360 out.go:177] 
	W0812 18:34:15.023100   45360 out.go:242] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0812 18:34:15.023109   45360 out.go:242] * 
	* 
	I0812 18:34:15.024369   45360 main.go:116] stdlog: detect_unix.go:31 open /proc/sys/kernel/osrelease: no such file or directory

                                                
                                                
** /stderr **
net_test.go:100: failed start: exit status 1
--- FAIL: TestNetworkPlugins/group/calico/Start (551.15s)
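Each "pod_ready.go:102" line above is a single poll of the calico pod's Ready condition; once the per-pod budget runs out (4m per pod, 8m for the whole extra wait) minikube stops polling and the start fails with GUEST_START. For reference, the sketch below shows the same kind of readiness poll written against client-go. The pod name, namespace, timeouts and kubeconfig path are modeled on this log and are illustrative assumptions only; this is not minikube's actual implementation.

	// podready_sketch.go: minimal "wait until pod is Ready" poll in the spirit of
	// minikube's pod_ready helper. Illustrative sketch, not the real code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumes the default kubeconfig; substitute the profile's kubeconfig as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s, give up after 4 minutes (mirrors the per-pod budget in the log).
		err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "calico-node-pw4bw", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient lookup errors: keep polling
			}
			return isPodReady(pod), nil
		})
		if err != nil {
			fmt.Println("timed out waiting for the condition:", err)
			return
		}
		fmt.Println("pod is Ready")
	}

When reproducing locally, "kubectl -n kube-system describe pod calico-node-pw4bw" against the affected profile is usually the quicker way to see which container or condition keeps the calico pods from becoming Ready.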

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (292.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
E0812 18:31:03.742374   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138079358s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
E0812 18:31:24.222825   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.161020977s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.158796138s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132962866s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
E0812 18:32:05.183696   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
E0812 18:32:13.808864   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 18:32:16.415090   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.158414206s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135585492s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
E0812 18:32:43.912537   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.15201645s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0812 18:33:02.195149   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:33:11.636093   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
E0812 18:33:15.085056   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:33:17.940316   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 18:33:27.104254   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.16125755s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0812 18:33:29.936734   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
E0812 18:33:29.942585   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
E0812 18:33:29.952900   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
E0812 18:33:29.973977   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
E0812 18:33:30.015084   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
E0812 18:33:30.101803   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
E0812 18:33:30.262158   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
E0812 18:33:30.585741   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
E0812 18:33:31.227035   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
E0812 18:33:32.508232   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
E0812 18:33:35.068470   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
E0812 18:33:40.189414   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
E0812 18:33:50.508295   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
E0812 18:34:03.107873   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128050511s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0812 18:34:10.992142   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
E0812 18:34:51.954633   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146576644s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0812 18:35:00.331613   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:35:16.986109   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
E0812 18:35:43.283556   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144070507s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:168: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:173: failed nslookup: got=";; connection timed out; no servers could be reached\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (292.44s)
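The probe behind this failure is simple: the test execs "nslookup kubernetes.default" inside the netcat deployment and expects the kubernetes service ClusterIP (10.96.0.1) in the reply; here every attempt times out, so the pod never reaches CoreDNS. The sketch below re-runs the same probe from Go via kubectl; the context and deployment names are copied from the log, kubectl is assumed to be on PATH, and this is not the test's actual code.

	// dnscheck_sketch.go: re-runs the DNS probe this test performs, via kubectl.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "enable-default-cni-20210812175913-27878" // context name from the log
		cmd := exec.Command("kubectl", "--context", ctx,
			"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default")
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out))
		if err != nil {
			fmt.Println("nslookup failed:", err)
			return
		}
		// The integration test expects the kubernetes service ClusterIP to appear.
		if strings.Contains(string(out), "10.96.0.1") {
			fmt.Println("DNS resolution OK")
		} else {
			fmt.Println("unexpected nslookup output")
		}
	}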

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (274.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kindnet-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : signal: killed (4m34.796375592s)

                                                
                                                
-- stdout --
	* [kindnet-20210812175913-27878] minikube v1.22.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node kindnet-20210812175913-27878 in cluster kindnet-20210812175913-27878
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 18:34:39.123730   46378 out.go:298] Setting OutFile to fd 1 ...
	I0812 18:34:39.123868   46378 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 18:34:39.123872   46378 out.go:311] Setting ErrFile to fd 2...
	I0812 18:34:39.123875   46378 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 18:34:39.123962   46378 root.go:313] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0812 18:34:39.124293   46378 out.go:305] Setting JSON to false
	I0812 18:34:39.144138   46378 start.go:111] hostinfo: {"hostname":"37310.local","uptime":16453,"bootTime":1628802026,"procs":336,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"c86236b2-4976-3542-80ca-74a6b8b4ba03"}
	W0812 18:34:39.144220   46378 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0812 18:34:39.171292   46378 out.go:177] * [kindnet-20210812175913-27878] minikube v1.22.0 on Darwin 11.2.3
	I0812 18:34:39.171435   46378 notify.go:169] Checking for updates...
	I0812 18:34:39.218938   46378 out.go:177]   - MINIKUBE_LOCATION=12230
	I0812 18:34:39.245098   46378 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0812 18:34:39.271064   46378 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0812 18:34:39.296872   46378 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0812 18:34:39.297394   46378 driver.go:335] Setting default libvirt URI to qemu:///system
	I0812 18:34:39.400749   46378 docker.go:132] docker version: linux-20.10.6
	I0812 18:34:39.400882   46378 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 18:34:39.595050   46378 info.go:263] docker info: {ID:NUVB:KIYS:WZ5S:BBBQ:I5K6:TSUW:ISZD:Z2IF:JI5D:OMPC:DPCS:TFBF Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2021-08-13 01:34:39.534625691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 18:34:39.622306   46378 out.go:177] * Using the docker driver based on user configuration
	I0812 18:34:39.622346   46378 start.go:278] selected driver: docker
	I0812 18:34:39.622362   46378 start.go:751] validating driver "docker" against <nil>
	I0812 18:34:39.622383   46378 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0812 18:34:39.626490   46378 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 18:34:39.818428   46378 info.go:263] docker info: {ID:NUVB:KIYS:WZ5S:BBBQ:I5K6:TSUW:ISZD:Z2IF:JI5D:OMPC:DPCS:TFBF Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2021-08-13 01:34:39.759504241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 18:34:39.818533   46378 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0812 18:34:39.818666   46378 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 18:34:39.818681   46378 cni.go:93] Creating CNI manager for "kindnet"
	I0812 18:34:39.818692   46378 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0812 18:34:39.818697   46378 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0812 18:34:39.818701   46378 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0812 18:34:39.818713   46378 start_flags.go:277] config:
	{Name:kindnet-20210812175913-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:kindnet-20210812175913-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 18:34:39.845738   46378 out.go:177] * Starting control plane node kindnet-20210812175913-27878 in cluster kindnet-20210812175913-27878
	I0812 18:34:39.845798   46378 cache.go:117] Beginning downloading kic base image for docker with docker
	I0812 18:34:39.872304   46378 out.go:177] * Pulling base image ...
	I0812 18:34:39.872409   46378 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0812 18:34:39.872478   46378 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0812 18:34:39.872488   46378 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4
	I0812 18:34:39.872504   46378 cache.go:56] Caching tarball of preloaded images
	I0812 18:34:39.872690   46378 preload.go:173] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0812 18:34:39.872713   46378 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0812 18:34:39.874320   46378 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/config.json ...
	I0812 18:34:39.874500   46378 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/config.json: {Name:mk71a6dab1b6fa2d9643fea4c8be3d67a1cbd7a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:34:39.999757   46378 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0812 18:34:39.999783   46378 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0812 18:34:39.999794   46378 cache.go:205] Successfully downloaded all kic artifacts
	I0812 18:34:39.999845   46378 start.go:313] acquiring machines lock for kindnet-20210812175913-27878: {Name:mk0d8ddad0ec0f376dbd3b65e2b478d958808a6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 18:34:40.000254   46378 start.go:317] acquired machines lock for "kindnet-20210812175913-27878" in 397.215µs
	I0812 18:34:40.000289   46378 start.go:89] Provisioning new machine with config: &{Name:kindnet-20210812175913-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:kindnet-20210812175913-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0812 18:34:40.000359   46378 start.go:126] createHost starting for "" (driver="docker")
	I0812 18:34:40.026170   46378 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0812 18:34:40.026350   46378 start.go:160] libmachine.API.Create for "kindnet-20210812175913-27878" (driver="docker")
	I0812 18:34:40.026374   46378 client.go:168] LocalClient.Create starting
	I0812 18:34:40.026450   46378 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0812 18:34:40.047444   46378 main.go:130] libmachine: Decoding PEM data...
	I0812 18:34:40.047495   46378 main.go:130] libmachine: Parsing certificate...
	I0812 18:34:40.047708   46378 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0812 18:34:40.047787   46378 main.go:130] libmachine: Decoding PEM data...
	I0812 18:34:40.047807   46378 main.go:130] libmachine: Parsing certificate...
	I0812 18:34:40.048893   46378 cli_runner.go:115] Run: docker network inspect kindnet-20210812175913-27878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0812 18:34:40.171135   46378 cli_runner.go:162] docker network inspect kindnet-20210812175913-27878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0812 18:34:40.171247   46378 network_create.go:255] running [docker network inspect kindnet-20210812175913-27878] to gather additional debugging logs...
	I0812 18:34:40.171267   46378 cli_runner.go:115] Run: docker network inspect kindnet-20210812175913-27878
	W0812 18:34:40.293401   46378 cli_runner.go:162] docker network inspect kindnet-20210812175913-27878 returned with exit code 1
	I0812 18:34:40.293431   46378 network_create.go:258] error running [docker network inspect kindnet-20210812175913-27878]: docker network inspect kindnet-20210812175913-27878: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20210812175913-27878
	I0812 18:34:40.293456   46378 network_create.go:260] output of [docker network inspect kindnet-20210812175913-27878]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20210812175913-27878
	
	** /stderr **
	I0812 18:34:40.293557   46378 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0812 18:34:40.413501   46378 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000eb50] misses:0}
	I0812 18:34:40.413540   46378 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0812 18:34:40.413560   46378 network_create.go:106] attempt to create docker network kindnet-20210812175913-27878 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0812 18:34:40.413645   46378 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210812175913-27878
	W0812 18:34:40.536585   46378 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210812175913-27878 returned with exit code 1
	W0812 18:34:40.536630   46378 network_create.go:98] failed to create docker network kindnet-20210812175913-27878 192.168.49.0/24, will retry: subnet is taken
	I0812 18:34:40.536843   46378 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000eb50] amended:false}} dirty:map[] misses:0}
	I0812 18:34:40.536862   46378 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0812 18:34:40.537037   46378 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000eb50] amended:true}} dirty:map[192.168.49.0:0xc00000eb50 192.168.58.0:0xc00012c338] misses:0}
	I0812 18:34:40.537056   46378 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0812 18:34:40.537063   46378 network_create.go:106] attempt to create docker network kindnet-20210812175913-27878 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0812 18:34:40.537135   46378 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210812175913-27878
	I0812 18:34:46.676265   46378 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210812175913-27878: (6.138909526s)
	I0812 18:34:46.676289   46378 network_create.go:90] docker network kindnet-20210812175913-27878 192.168.58.0/24 created
	I0812 18:34:46.676309   46378 kic.go:106] calculated static IP "192.168.58.2" for the "kindnet-20210812175913-27878" container
	I0812 18:34:46.676424   46378 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0812 18:34:46.797557   46378 cli_runner.go:115] Run: docker volume create kindnet-20210812175913-27878 --label name.minikube.sigs.k8s.io=kindnet-20210812175913-27878 --label created_by.minikube.sigs.k8s.io=true
	I0812 18:34:46.920276   46378 oci.go:102] Successfully created a docker volume kindnet-20210812175913-27878
	I0812 18:34:46.920425   46378 cli_runner.go:115] Run: docker run --rm --name kindnet-20210812175913-27878-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210812175913-27878 --entrypoint /usr/bin/test -v kindnet-20210812175913-27878:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0812 18:34:47.420241   46378 oci.go:106] Successfully prepared a docker volume kindnet-20210812175913-27878
	I0812 18:34:47.420306   46378 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0812 18:34:47.420324   46378 kic.go:179] Starting extracting preloaded images to volume ...
	I0812 18:34:47.420369   46378 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0812 18:34:47.420413   46378 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20210812175913-27878:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0812 18:34:47.640258   46378 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20210812175913-27878 --name kindnet-20210812175913-27878 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210812175913-27878 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20210812175913-27878 --network kindnet-20210812175913-27878 --ip 192.168.58.2 --volume kindnet-20210812175913-27878:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0812 18:34:52.981551   46378 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20210812175913-27878:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (5.560942292s)
	I0812 18:34:52.981574   46378 kic.go:188] duration metric: took 5.561099 seconds to extract preloaded images to volume
	I0812 18:35:00.696123   46378 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20210812175913-27878 --name kindnet-20210812175913-27878 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210812175913-27878 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20210812175913-27878 --network kindnet-20210812175913-27878 --ip 192.168.58.2 --volume kindnet-20210812175913-27878:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79: (13.05543151s)
	I0812 18:35:00.697823   46378 cli_runner.go:115] Run: docker container inspect kindnet-20210812175913-27878 --format={{.State.Running}}
	I0812 18:35:00.825994   46378 cli_runner.go:115] Run: docker container inspect kindnet-20210812175913-27878 --format={{.State.Status}}
	I0812 18:35:00.953344   46378 cli_runner.go:115] Run: docker exec kindnet-20210812175913-27878 stat /var/lib/dpkg/alternatives/iptables
	I0812 18:35:01.133439   46378 oci.go:278] the created container "kindnet-20210812175913-27878" has a running status.
	I0812 18:35:01.133475   46378 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/kindnet-20210812175913-27878/id_rsa...
	I0812 18:35:01.233091   46378 kic_runner.go:188] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/kindnet-20210812175913-27878/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0812 18:35:01.415417   46378 cli_runner.go:115] Run: docker container inspect kindnet-20210812175913-27878 --format={{.State.Status}}
	I0812 18:35:01.540621   46378 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0812 18:35:01.540641   46378 kic_runner.go:115] Args: [docker exec --privileged kindnet-20210812175913-27878 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0812 18:35:01.719955   46378 cli_runner.go:115] Run: docker container inspect kindnet-20210812175913-27878 --format={{.State.Status}}
	I0812 18:35:01.847396   46378 machine.go:88] provisioning docker machine ...
	I0812 18:35:01.847442   46378 ubuntu.go:169] provisioning hostname "kindnet-20210812175913-27878"
	I0812 18:35:01.847554   46378 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210812175913-27878
	I0812 18:35:01.969596   46378 main.go:130] libmachine: Using SSH client type: native
	I0812 18:35:01.969806   46378 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 60295 <nil> <nil>}
	I0812 18:35:01.969822   46378 main.go:130] libmachine: About to run SSH command:
	sudo hostname kindnet-20210812175913-27878 && echo "kindnet-20210812175913-27878" | sudo tee /etc/hostname
	I0812 18:35:01.970985   46378 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0812 18:35:05.103962   46378 main.go:130] libmachine: SSH cmd err, output: <nil>: kindnet-20210812175913-27878
	
	I0812 18:35:05.104071   46378 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210812175913-27878
	I0812 18:35:05.228138   46378 main.go:130] libmachine: Using SSH client type: native
	I0812 18:35:05.228325   46378 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 60295 <nil> <nil>}
	I0812 18:35:05.228343   46378 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20210812175913-27878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20210812175913-27878/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20210812175913-27878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 18:35:05.346013   46378 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0812 18:35:05.346033   46378 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0812 18:35:05.346047   46378 ubuntu.go:177] setting up certificates
	I0812 18:35:05.346054   46378 provision.go:83] configureAuth start
	I0812 18:35:05.346157   46378 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20210812175913-27878
	I0812 18:35:05.470164   46378 provision.go:137] copyHostCerts
	I0812 18:35:05.470261   46378 exec_runner.go:145] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0812 18:35:05.470274   46378 exec_runner.go:190] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0812 18:35:05.471301   46378 exec_runner.go:152] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0812 18:35:05.471480   46378 exec_runner.go:145] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0812 18:35:05.471492   46378 exec_runner.go:190] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0812 18:35:05.471549   46378 exec_runner.go:152] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0812 18:35:05.471694   46378 exec_runner.go:145] found /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0812 18:35:05.471700   46378 exec_runner.go:190] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0812 18:35:05.471753   46378 exec_runner.go:152] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1675 bytes)
	I0812 18:35:05.471874   46378 provision.go:111] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.kindnet-20210812175913-27878 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20210812175913-27878]
	I0812 18:35:05.512847   46378 provision.go:171] copyRemoteCerts
	I0812 18:35:05.513180   46378 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 18:35:05.513257   46378 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210812175913-27878
	I0812 18:35:05.636465   46378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60295 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/kindnet-20210812175913-27878/id_rsa Username:docker}
	I0812 18:35:05.720569   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 18:35:05.738037   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 18:35:05.758531   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0812 18:35:05.778974   46378 provision.go:86] duration metric: configureAuth took 432.895104ms
	I0812 18:35:05.778987   46378 ubuntu.go:193] setting minikube options for container-runtime
	I0812 18:35:05.779223   46378 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210812175913-27878
	I0812 18:35:05.905416   46378 main.go:130] libmachine: Using SSH client type: native
	I0812 18:35:05.905600   46378 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 60295 <nil> <nil>}
	I0812 18:35:05.905608   46378 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0812 18:35:06.024377   46378 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0812 18:35:06.024392   46378 ubuntu.go:71] root file system type: overlay
	I0812 18:35:06.024554   46378 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0812 18:35:06.024655   46378 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210812175913-27878
	I0812 18:35:06.149457   46378 main.go:130] libmachine: Using SSH client type: native
	I0812 18:35:06.149605   46378 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 60295 <nil> <nil>}
	I0812 18:35:06.149663   46378 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0812 18:35:06.276132   46378 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0812 18:35:06.276238   46378 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210812175913-27878
	I0812 18:35:06.399826   46378 main.go:130] libmachine: Using SSH client type: native
	I0812 18:35:06.399997   46378 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13fa2e0] 0x13fa2a0 <nil>  [] 0s} 127.0.0.1 60295 <nil> <nil>}
	I0812 18:35:06.400011   46378 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0812 18:35:36.420283   46378 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:54:50.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-13 01:35:06.288819156 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0812 18:35:36.420313   46378 machine.go:91] provisioned docker machine in 34.571966082s
	I0812 18:35:36.420323   46378 client.go:171] LocalClient.Create took 56.392421173s
	I0812 18:35:36.420347   46378 start.go:168] duration metric: libmachine.API.Create for "kindnet-20210812175913-27878" took 56.392473263s
	I0812 18:35:36.420366   46378 start.go:267] post-start starting for "kindnet-20210812175913-27878" (driver="docker")
	I0812 18:35:36.420375   46378 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 18:35:36.420509   46378 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 18:35:36.420629   46378 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210812175913-27878
	I0812 18:35:36.545789   46378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60295 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/kindnet-20210812175913-27878/id_rsa Username:docker}
	I0812 18:35:36.631153   46378 ssh_runner.go:149] Run: cat /etc/os-release
	I0812 18:35:36.634956   46378 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0812 18:35:36.634970   46378 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0812 18:35:36.634977   46378 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0812 18:35:36.634984   46378 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0812 18:35:36.634993   46378 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0812 18:35:36.635089   46378 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0812 18:35:36.635688   46378 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/278782.pem -> 278782.pem in /etc/ssl/certs
	I0812 18:35:36.635881   46378 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0812 18:35:36.643403   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/278782.pem --> /etc/ssl/certs/278782.pem (1708 bytes)
	I0812 18:35:36.661059   46378 start.go:270] post-start completed in 240.67506ms
	I0812 18:35:36.661635   46378 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20210812175913-27878
	I0812 18:35:36.784369   46378 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/config.json ...
	I0812 18:35:36.784768   46378 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 18:35:36.784831   46378 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210812175913-27878
	I0812 18:35:36.908767   46378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60295 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/kindnet-20210812175913-27878/id_rsa Username:docker}
	I0812 18:35:36.993807   46378 start.go:129] duration metric: createHost completed in 56.991900914s
	I0812 18:35:36.993823   46378 start.go:80] releasing machines lock for "kindnet-20210812175913-27878", held for 56.992023356s
	I0812 18:35:36.993931   46378 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20210812175913-27878
	I0812 18:35:37.117555   46378 ssh_runner.go:149] Run: systemctl --version
	I0812 18:35:37.117630   46378 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210812175913-27878
	I0812 18:35:37.118622   46378 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0812 18:35:37.118874   46378 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210812175913-27878
	I0812 18:35:37.248449   46378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60295 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/kindnet-20210812175913-27878/id_rsa Username:docker}
	I0812 18:35:37.248461   46378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60295 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/kindnet-20210812175913-27878/id_rsa Username:docker}
	I0812 18:35:37.436015   46378 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0812 18:35:37.445269   46378 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0812 18:35:37.455267   46378 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0812 18:35:37.455338   46378 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0812 18:35:37.464712   46378 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 18:35:37.477276   46378 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0812 18:35:37.534060   46378 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0812 18:35:37.589946   46378 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0812 18:35:37.599739   46378 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0812 18:35:37.656800   46378 ssh_runner.go:149] Run: sudo systemctl start docker
	I0812 18:35:37.666470   46378 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0812 18:35:37.710211   46378 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0812 18:35:37.805753   46378 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0812 18:35:37.805951   46378 cli_runner.go:115] Run: docker exec -t kindnet-20210812175913-27878 dig +short host.docker.internal
	I0812 18:35:38.001207   46378 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
	I0812 18:35:38.002123   46378 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0812 18:35:38.006739   46378 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 18:35:38.016476   46378 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20210812175913-27878
	I0812 18:35:38.137611   46378 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0812 18:35:38.137715   46378 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 18:35:38.173424   46378 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0812 18:35:38.173441   46378 docker.go:466] Images already preloaded, skipping extraction
	I0812 18:35:38.173531   46378 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0812 18:35:38.208699   46378 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0812 18:35:38.208712   46378 cache_images.go:74] Images are preloaded, skipping loading
	I0812 18:35:38.208820   46378 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0812 18:35:38.291607   46378 cni.go:93] Creating CNI manager for "kindnet"
	I0812 18:35:38.291634   46378 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0812 18:35:38.291647   46378 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20210812175913-27878 NodeName:kindnet-20210812175913-27878 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0812 18:35:38.291752   46378 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kindnet-20210812175913-27878"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 18:35:38.291846   46378 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kindnet-20210812175913-27878 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:kindnet-20210812175913-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0812 18:35:38.291916   46378 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0812 18:35:38.300065   46378 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 18:35:38.300124   46378 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 18:35:38.307219   46378 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (406 bytes)
	I0812 18:35:38.319920   46378 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 18:35:38.332611   46378 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I0812 18:35:38.347236   46378 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0812 18:35:38.351199   46378 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 18:35:38.360883   46378 certs.go:52] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878 for IP: 192.168.58.2
	I0812 18:35:38.360979   46378 certs.go:179] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0812 18:35:38.361013   46378 certs.go:179] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0812 18:35:38.361102   46378 certs.go:294] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/client.key
	I0812 18:35:38.361112   46378 crypto.go:69] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/client.crt with IP's: []
	I0812 18:35:38.430368   46378 crypto.go:157] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/client.crt ...
	I0812 18:35:38.430383   46378 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/client.crt: {Name:mk53e098a424d8d294ce6318d8693313abc4095e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:35:38.431716   46378 crypto.go:165] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/client.key ...
	I0812 18:35:38.431733   46378 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/client.key: {Name:mk6818e0dab50702fde353d03dbd459f7697222f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:35:38.432599   46378 certs.go:294] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/apiserver.key.cee25041
	I0812 18:35:38.432612   46378 crypto.go:69] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0812 18:35:38.514947   46378 crypto.go:157] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/apiserver.crt.cee25041 ...
	I0812 18:35:38.514966   46378 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/apiserver.crt.cee25041: {Name:mk03acfcb345a55b3e9bee196c99e43993e3b059 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:35:38.515214   46378 crypto.go:165] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/apiserver.key.cee25041 ...
	I0812 18:35:38.515222   46378 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/apiserver.key.cee25041: {Name:mk889503d37cb3eba29ff0d0c8cf6cac034e64a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:35:38.516156   46378 certs.go:305] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/apiserver.crt
	I0812 18:35:38.516364   46378 certs.go:309] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/apiserver.key
	I0812 18:35:38.516555   46378 certs.go:294] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/proxy-client.key
	I0812 18:35:38.516563   46378 crypto.go:69] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/proxy-client.crt with IP's: []
	I0812 18:35:38.592277   46378 crypto.go:157] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/proxy-client.crt ...
	I0812 18:35:38.592287   46378 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/proxy-client.crt: {Name:mkf62bea26cd4ba358005a0117979e3214bea2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:35:38.593871   46378 crypto.go:165] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/proxy-client.key ...
	I0812 18:35:38.593888   46378 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/proxy-client.key: {Name:mk836d971396b55a5f2e1df40da7af03fde49b55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:35:38.594445   46378 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/27878.pem (1338 bytes)
	W0812 18:35:38.594499   46378 certs.go:369] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/27878_empty.pem, impossibly tiny 0 bytes
	I0812 18:35:38.594511   46378 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 18:35:38.594550   46378 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0812 18:35:38.594587   46378 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0812 18:35:38.594631   46378 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1675 bytes)
	I0812 18:35:38.594709   46378 certs.go:373] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/278782.pem (1708 bytes)
	I0812 18:35:38.595529   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0812 18:35:38.612838   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 18:35:38.629366   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 18:35:38.646032   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210812175913-27878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 18:35:38.662791   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 18:35:38.679297   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0812 18:35:38.696118   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 18:35:38.713041   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0812 18:35:38.729813   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 18:35:38.746687   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/27878.pem --> /usr/share/ca-certificates/27878.pem (1338 bytes)
	I0812 18:35:38.763139   46378 ssh_runner.go:316] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/278782.pem --> /usr/share/ca-certificates/278782.pem (1708 bytes)
	I0812 18:35:38.780714   46378 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 18:35:38.794586   46378 ssh_runner.go:149] Run: openssl version
	I0812 18:35:38.800187   46378 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/278782.pem && ln -fs /usr/share/ca-certificates/278782.pem /etc/ssl/certs/278782.pem"
	I0812 18:35:38.808117   46378 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/278782.pem
	I0812 18:35:38.812275   46378 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 13 00:03 /usr/share/ca-certificates/278782.pem
	I0812 18:35:38.812324   46378 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/278782.pem
	I0812 18:35:38.817714   46378 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/278782.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 18:35:38.825639   46378 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 18:35:38.834121   46378 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 18:35:38.838487   46378 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 13 00:01 /usr/share/ca-certificates/minikubeCA.pem
	I0812 18:35:38.838541   46378 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 18:35:38.844098   46378 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 18:35:38.851864   46378 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27878.pem && ln -fs /usr/share/ca-certificates/27878.pem /etc/ssl/certs/27878.pem"
	I0812 18:35:38.859680   46378 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/27878.pem
	I0812 18:35:38.864041   46378 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 13 00:03 /usr/share/ca-certificates/27878.pem
	I0812 18:35:38.864113   46378 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27878.pem
	I0812 18:35:38.870025   46378 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/27878.pem /etc/ssl/certs/51391683.0"
	I0812 18:35:38.878023   46378 kubeadm.go:390] StartCluster: {Name:kindnet-20210812175913-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:kindnet-20210812175913-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 18:35:38.878140   46378 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0812 18:35:38.912980   46378 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 18:35:38.920728   46378 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 18:35:38.928093   46378 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0812 18:35:38.928149   46378 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 18:35:38.935658   46378 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 18:35:38.935693   46378 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0812 18:35:39.648079   46378 out.go:204]   - Generating certificates and keys ...
	I0812 18:35:41.476362   46378 out.go:204]   - Booting up control plane ...
	I0812 18:36:03.010552   46378 out.go:204]   - Configuring RBAC rules ...
	I0812 18:36:03.423564   46378 cni.go:93] Creating CNI manager for "kindnet"
	I0812 18:36:03.451201   46378 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0812 18:36:03.451563   46378 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0812 18:36:03.457385   46378 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0812 18:36:03.457397   46378 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0812 18:36:03.476105   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0812 18:36:03.819213   46378 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 18:36:03.819309   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=kindnet-20210812175913-27878 minikube.k8s.io/updated_at=2021_08_12T18_36_03_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:03.819312   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:04.043211   46378 ops.go:34] apiserver oom_adj: -16
	I0812 18:36:04.043319   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:04.736065   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:05.235778   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:05.737375   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:06.237393   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:06.735560   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:07.235757   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:07.735595   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:08.235658   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:08.736110   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:09.236002   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:09.735639   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:10.235628   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:10.735684   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:11.235678   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:11.735692   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:12.237436   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:12.735827   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:13.234424   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:13.737316   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:14.233246   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:14.733330   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:15.233212   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:15.734589   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:16.235120   46378 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 18:36:16.350853   46378 kubeadm.go:985] duration metric: took 12.531291524s to wait for elevateKubeSystemPrivileges.
	I0812 18:36:16.350875   46378 kubeadm.go:392] StartCluster complete in 37.471846628s
	I0812 18:36:16.350900   46378 settings.go:142] acquiring lock: {Name:mk3e1d203e6439798c8d384e90b2bc232b4914ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:36:16.351000   46378 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0812 18:36:16.351709   46378 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mka81e290e52453cdddcec52ed4fa17d888b133f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 18:36:16.881809   46378 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20210812175913-27878" rescaled to 1
	I0812 18:36:16.881859   46378 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 18:36:16.881879   46378 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0812 18:36:16.881853   46378 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0812 18:36:16.881928   46378 addons.go:59] Setting storage-provisioner=true in profile "kindnet-20210812175913-27878"
	I0812 18:36:16.881939   46378 addons.go:59] Setting default-storageclass=true in profile "kindnet-20210812175913-27878"
	I0812 18:36:16.903722   46378 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20210812175913-27878"
	I0812 18:36:16.903714   46378 addons.go:135] Setting addon storage-provisioner=true in "kindnet-20210812175913-27878"
	I0812 18:36:16.903672   46378 out.go:177] * Verifying Kubernetes components...
	W0812 18:36:16.903733   46378 addons.go:147] addon storage-provisioner should already be in state true
	I0812 18:36:16.903774   46378 host.go:66] Checking if "kindnet-20210812175913-27878" exists ...
	I0812 18:36:16.903794   46378 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0812 18:36:16.904194   46378 cli_runner.go:115] Run: docker container inspect kindnet-20210812175913-27878 --format={{.State.Status}}
	I0812 18:36:16.905908   46378 cli_runner.go:115] Run: docker container inspect kindnet-20210812175913-27878 --format={{.State.Status}}
	I0812 18:36:17.090113   46378 addons.go:135] Setting addon default-storageclass=true in "kindnet-20210812175913-27878"
	I0812 18:36:17.123562   46378 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 18:36:17.095871   46378 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 18:36:17.095923   46378 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20210812175913-27878
	W0812 18:36:17.123582   46378 addons.go:147] addon default-storageclass should already be in state true
	I0812 18:36:17.123688   46378 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 18:36:17.123700   46378 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 18:36:17.123692   46378 host.go:66] Checking if "kindnet-20210812175913-27878" exists ...
	I0812 18:36:17.123818   46378 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210812175913-27878
	I0812 18:36:17.130089   46378 cli_runner.go:115] Run: docker container inspect kindnet-20210812175913-27878 --format={{.State.Status}}
	I0812 18:36:17.324513   46378 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 18:36:17.324535   46378 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 18:36:17.324646   46378 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210812175913-27878
	I0812 18:36:17.324626   46378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60295 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/kindnet-20210812175913-27878/id_rsa Username:docker}
	I0812 18:36:17.331498   46378 node_ready.go:35] waiting up to 5m0s for node "kindnet-20210812175913-27878" to be "Ready" ...
	I0812 18:36:17.439432   46378 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 18:36:17.444189   46378 start.go:736] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0812 18:36:17.487778   46378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60295 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/kindnet-20210812175913-27878/id_rsa Username:docker}
	I0812 18:36:17.600974   46378 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 18:36:17.897290   46378 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0812 18:36:17.897306   46378 addons.go:344] enableAddons completed in 1.01540884s
	I0812 18:36:19.340133   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:21.345163   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:23.842270   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:26.340934   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:28.849202   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:31.346761   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:33.848559   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:36.341095   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:38.344790   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:40.846693   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:42.847976   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:45.341648   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:47.840294   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:49.840510   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:51.841845   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:53.842440   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:55.842605   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:36:58.340508   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:00.342620   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:02.841845   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:04.842925   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:07.341373   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:09.842017   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:12.344399   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:14.345383   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:16.843828   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:18.846323   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:21.344040   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:23.851763   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:26.341307   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:28.343505   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:30.841358   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:32.841570   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:34.847463   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:36.868530   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:39.344059   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:41.851343   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:43.855309   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:46.346834   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:48.348560   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:50.864434   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:53.343267   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:55.843986   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:37:58.346588   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:00.842427   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:02.843991   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:04.845100   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:07.346019   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:09.354294   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:11.843109   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:13.849844   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:16.344158   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:18.344435   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:20.844898   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:23.345527   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:25.349345   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:27.850989   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:29.851943   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:32.351639   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:34.845608   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:36.850928   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:39.350309   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:41.845383   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:43.850692   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:46.344749   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:48.345506   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:50.351553   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:52.845771   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:54.851248   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:57.346645   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:38:59.848449   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:39:01.850632   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:39:03.851557   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:39:06.350059   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:39:08.853800   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:39:11.344272   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"
	I0812 18:39:13.345125   46378 node_ready.go:58] node "kindnet-20210812175913-27878" has status "Ready":"False"

                                                
                                                
** /stderr **
net_test.go:100: failed start: signal: killed
--- FAIL: TestNetworkPlugins/group/kindnet/Start (274.81s)
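The kindnet start above was killed by the test harness after ~275s while the node still reported Ready=False; certificates, control plane boot and RBAC all completed, so the stall points at the kindnet CNI pods never becoming healthy rather than at kubeadm. A minimal manual-triage sketch, assuming the profile is still running and that the kindnet DaemonSet carries an app=kindnet label (an assumption about the manifest, not something shown in this log):

	# Why is the node still NotReady?
	kubectl --context kindnet-20210812175913-27878 describe node kindnet-20210812175913-27878

	# Are the kindnet CNI pods healthy? (label is assumed; adjust to the actual DaemonSet)
	kubectl --context kindnet-20210812175913-27878 -n kube-system get pods -l app=kindnet
	kubectl --context kindnet-20210812175913-27878 -n kube-system logs -l app=kindnet --tail=50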

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (334.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
E0812 18:38:02.272544   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136480677s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
E0812 18:38:15.163654   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:38:18.015979   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.1584323s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
E0812 18:38:30.008638   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12883877s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155405846s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0812 18:38:57.789065   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
E0812 18:39:03.116041   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: signal: killed (14.311631377s)

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (3.76µs)

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.452µs)
E0812 18:39:32.605950   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.117µs)
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.375µs)
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.43µs)
E0812 18:40:43.291456   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
E0812 18:40:44.174334   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
E0812 18:40:44.180939   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
E0812 18:40:44.191414   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
E0812 18:40:44.212670   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
E0812 18:40:44.254557   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
E0812 18:40:44.337780   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
E0812 18:40:44.501358   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
E0812 18:40:44.824563   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
E0812 18:40:45.465525   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
E0812 18:40:46.746074   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
E0812 18:40:49.308125   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
E0812 18:40:54.428953   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.358µs)
E0812 18:41:04.670613   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
E0812 18:41:25.151561   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
E0812 18:42:06.112983   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210812175913-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.571µs)
E0812 18:42:13.889046   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 18:42:43.994378   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:43:01.142510   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 18:43:02.279595   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:43:15.175815   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:43:18.024217   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.868µs)
net_test.go:168: failed to do nslookup on kubernetes.default: context deadline exceeded
net_test.go:173: failed nslookup: got="", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (334.51s)
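The bridge DNS failure above is the test's in-cluster probe: exec into the netcat deployment and nslookup kubernetes.default, expecting an answer containing 10.96.0.1; every attempt either timed out or hit the suite's context deadline. A sketch of the same check done by hand, plus a look at CoreDNS, assuming the bridge profile is still up (k8s-app=kube-dns is the standard CoreDNS selector, not taken from this log):

	# Repeat the probe the test runs
	kubectl --context bridge-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default

	# Is CoreDNS running, and what is it logging?
	kubectl --context bridge-20210812175913-27878 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context bridge-20210812175913-27878 -n kube-system logs -l k8s-app=kube-dns --tail=50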

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 
net_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubenet-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : context deadline exceeded (541ns)
net_test.go:100: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/kubenet/Start (0.00s)
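Unlike the two failures above, this one never ran minikube at all: the start command returned "context deadline exceeded" after 541ns, meaning the shared test deadline had already expired before the kubenet group got its turn. To reproduce outside the harness, the same start command (copied from the invocation above) can be run directly with no deadline attached:

	out/minikube-darwin-amd64 start -p kubenet-20210812175913-27878 --memory=2048 \
	  --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker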

                                                
                                    

Test pass (214/247)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 17.75
7 TestDownloadOnly/v1.14.0/kubectl 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.28
10 TestDownloadOnly/v1.21.3/json-events 8.01
11 TestDownloadOnly/v1.21.3/preload-exists 0
14 TestDownloadOnly/v1.21.3/kubectl 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.27
17 TestDownloadOnly/v1.22.0-rc.0/json-events 8.14
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
21 TestDownloadOnly/v1.22.0-rc.0/kubectl 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.28
23 TestDownloadOnly/DeleteAll 1.14
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.64
25 TestDownloadOnlyKic 7.34
26 TestOffline 123.14
29 TestDockerFlags 64.19
30 TestForceSystemdFlag 70.41
31 TestForceSystemdEnv 110.09
33 TestHyperKitDriverInstallOrUpdate 5.26
36 TestErrorSpam/setup 79.27
37 TestErrorSpam/start 2.27
38 TestErrorSpam/status 1.96
39 TestErrorSpam/pause 2.2
40 TestErrorSpam/unpause 2.23
41 TestErrorSpam/stop 12.83
44 TestFunctional/serial/CopySyncFile 0
45 TestFunctional/serial/StartWithProxy 121.8
46 TestFunctional/serial/AuditLog 0
47 TestFunctional/serial/SoftStart 7.45
48 TestFunctional/serial/KubeContext 0.04
49 TestFunctional/serial/KubectlGetPods 2.34
52 TestFunctional/serial/CacheCmd/cache/add_remote 6.54
53 TestFunctional/serial/CacheCmd/cache/add_local 2.13
54 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
55 TestFunctional/serial/CacheCmd/cache/list 0.07
56 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.71
57 TestFunctional/serial/CacheCmd/cache/cache_reload 3.73
58 TestFunctional/serial/CacheCmd/cache/delete 0.14
59 TestFunctional/serial/MinikubeKubectlCmd 0.48
60 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.56
61 TestFunctional/serial/ExtraConfig 43.22
62 TestFunctional/serial/ComponentHealth 0.06
63 TestFunctional/serial/LogsCmd 3.42
64 TestFunctional/serial/LogsFileCmd 3.55
66 TestFunctional/parallel/ConfigCmd 0.4
67 TestFunctional/parallel/DashboardCmd 4.01
68 TestFunctional/parallel/DryRun 1.39
69 TestFunctional/parallel/InternationalLanguage 0.63
70 TestFunctional/parallel/StatusCmd 2
74 TestFunctional/parallel/AddonsCmd 0.28
75 TestFunctional/parallel/PersistentVolumeClaim 25.45
77 TestFunctional/parallel/SSHCmd 1.33
78 TestFunctional/parallel/CpCmd 1.28
79 TestFunctional/parallel/MySQL 19.61
80 TestFunctional/parallel/FileSync 0.74
81 TestFunctional/parallel/CertSync 4.3
83 TestFunctional/parallel/DockerEnv 2.85
85 TestFunctional/parallel/NodeLabels 0.06
86 TestFunctional/parallel/LoadImage 2.75
87 TestFunctional/parallel/RemoveImage 3.16
88 TestFunctional/parallel/LoadImageFromFile 2.93
89 TestFunctional/parallel/BuildImage 4.1
90 TestFunctional/parallel/ListImages 0.45
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
93 TestFunctional/parallel/Version/short 0.09
94 TestFunctional/parallel/Version/components 1.17
95 TestFunctional/parallel/UpdateContextCmd/no_changes 0.37
96 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.89
97 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.36
99 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
101 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
102 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 13.64
104 TestFunctional/parallel/ProfileCmd/profile_not_create 0.86
105 TestFunctional/parallel/ProfileCmd/profile_list 0.77
106 TestFunctional/parallel/ProfileCmd/profile_json_output 0.79
107 TestFunctional/parallel/MountCmd/any-port 9.68
109 TestFunctional/parallel/MountCmd/specific-port 3.3
111 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
112 TestFunctional/delete_busybox_image 0.24
113 TestFunctional/delete_my-image_image 0.12
114 TestFunctional/delete_minikube_cached_images 0.12
118 TestJSONOutput/start/Audit 0
120 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
121 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
123 TestJSONOutput/pause/Audit 0
125 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
126 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
128 TestJSONOutput/unpause/Audit 0
130 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
131 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
133 TestJSONOutput/stop/Audit 0
135 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
136 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
137 TestErrorJSONOutput 0.78
139 TestKicCustomNetwork/create_custom_network 90.79
140 TestKicCustomNetwork/use_default_bridge_network 76.45
141 TestKicExistingNetwork 85.79
142 TestMainNoArgs 0.07
145 TestMultiNode/serial/FreshStart2Nodes 232.1
146 TestMultiNode/serial/DeployApp2Nodes 8.44
147 TestMultiNode/serial/PingHostFrom2Pods 0.92
148 TestMultiNode/serial/AddNode 111.12
149 TestMultiNode/serial/ProfileList 0.71
150 TestMultiNode/serial/CopyFile 5.31
151 TestMultiNode/serial/StopNode 11.26
152 TestMultiNode/serial/StartAfterStop 53.97
153 TestMultiNode/serial/RestartKeepsNodes 248.97
154 TestMultiNode/serial/DeleteNode 18.19
155 TestMultiNode/serial/StopMultiNode 35.06
156 TestMultiNode/serial/RestartMultiNode 149.91
157 TestMultiNode/serial/ValidateNameConflict 94.29
162 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
165 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
168 TestDebPackageInstall/install_amd64_debian:10/minikube 0
171 TestDebPackageInstall/install_amd64_debian:9/minikube 0
174 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
177 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
180 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
183 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
185 TestPreload 201.87
188 TestSkaffold 127.49
190 TestInsufficientStorage 60.77
191 TestRunningBinaryUpgrade 255.17
193 TestKubernetesUpgrade 176.85
194 TestMissingContainerUpgrade 188.11
196 TestPause/serial/Start 107.81
197 TestPause/serial/SecondStartNoReconfiguration 7.45
198 TestPause/serial/Pause 0.89
199 TestPause/serial/VerifyStatus 0.66
200 TestPause/serial/Unpause 0.88
201 TestPause/serial/PauseAgain 1.09
202 TestPause/serial/DeletePaused 15.5
203 TestPause/serial/VerifyDeletedResources 3.87
204 TestStoppedBinaryUpgrade/MinikubeLogs 2.77
223 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 12.05
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.03
226 TestStartStop/group/old-k8s-version/serial/FirstStart 132.93
228 TestStartStop/group/no-preload/serial/FirstStart 149.9
229 TestStartStop/group/old-k8s-version/serial/DeployApp 11.13
230 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.85
231 TestStartStop/group/old-k8s-version/serial/Stop 12.65
232 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.4
233 TestStartStop/group/old-k8s-version/serial/SecondStart 444.48
234 TestStartStop/group/no-preload/serial/DeployApp 11.95
235 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.9
236 TestStartStop/group/no-preload/serial/Stop 17.09
237 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.43
238 TestStartStop/group/no-preload/serial/SecondStart 375.08
239 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.02
240 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
241 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.97
242 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 7.18
243 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 1.04
244 TestStartStop/group/no-preload/serial/Pause 5.52
245 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.77
246 TestStartStop/group/old-k8s-version/serial/Pause 5.37
248 TestStartStop/group/embed-certs/serial/FirstStart 108.87
250 TestStartStop/group/default-k8s-different-port/serial/FirstStart 105.77
251 TestStartStop/group/embed-certs/serial/DeployApp 12.84
252 TestStartStop/group/default-k8s-different-port/serial/DeployApp 12.88
253 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.85
254 TestStartStop/group/embed-certs/serial/Stop 17.41
255 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.97
256 TestStartStop/group/default-k8s-different-port/serial/Stop 17.93
257 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.4
258 TestStartStop/group/embed-certs/serial/SecondStart 395.82
259 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.4
260 TestStartStop/group/default-k8s-different-port/serial/SecondStart 356.16
261 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 7.02
262 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 7.55
263 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.72
264 TestStartStop/group/default-k8s-different-port/serial/Pause 6.35
266 TestStartStop/group/newest-cni/serial/FirstStart 76.66
267 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
268 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 7.59
269 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.67
270 TestStartStop/group/embed-certs/serial/Pause 4.68
271 TestNetworkPlugins/group/auto/Start 118.3
272 TestStartStop/group/newest-cni/serial/DeployApp 0
273 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.91
274 TestStartStop/group/newest-cni/serial/Stop 16.46
275 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.4
276 TestStartStop/group/newest-cni/serial/SecondStart 42.37
277 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
278 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
279 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.71
280 TestStartStop/group/newest-cni/serial/Pause 6.45
281 TestNetworkPlugins/group/auto/KubeletFlags 0.71
282 TestNetworkPlugins/group/auto/NetCatPod 11.54
283 TestNetworkPlugins/group/false/Start 103.8
284 TestNetworkPlugins/group/auto/DNS 0.23
285 TestNetworkPlugins/group/auto/Localhost 0.19
286 TestNetworkPlugins/group/auto/HairPin 5.19
287 TestNetworkPlugins/group/cilium/Start 158.43
288 TestNetworkPlugins/group/false/KubeletFlags 0.67
289 TestNetworkPlugins/group/false/NetCatPod 12.54
290 TestNetworkPlugins/group/false/DNS 0.14
291 TestNetworkPlugins/group/false/Localhost 0.13
292 TestNetworkPlugins/group/false/HairPin 5.14
294 TestNetworkPlugins/group/cilium/ControllerPod 5.02
295 TestNetworkPlugins/group/cilium/KubeletFlags 0.66
296 TestNetworkPlugins/group/cilium/NetCatPod 13.51
297 TestNetworkPlugins/group/cilium/DNS 0.16
298 TestNetworkPlugins/group/cilium/Localhost 0.15
299 TestNetworkPlugins/group/cilium/HairPin 0.14
300 TestNetworkPlugins/group/custom-weave/Start 129.2
301 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.68
302 TestNetworkPlugins/group/custom-weave/NetCatPod 12.91
303 TestNetworkPlugins/group/enable-default-cni/Start 104.36
304 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.67
305 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.7
308 TestNetworkPlugins/group/bridge/Start 88.7
309 TestNetworkPlugins/group/bridge/KubeletFlags 0.72
310 TestNetworkPlugins/group/bridge/NetCatPod 17.59
x
+
TestDownloadOnly/v1.14.0/json-events (17.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20210812165933-27878 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20210812165933-27878 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker : (17.752594202s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (17.75s)
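The log above only records that the --download-only start with -o=json completed; it does not show the JSON stream itself. For readers who want to inspect that stream, the following stand-alone sketch is one way to do it. It is not part of the test suite, and the event field names are deliberately not assumed: each line is decoded into a generic map and printed.

    // events_sketch.go - hypothetical helper, not part of minikube's tests.
    // Reads the line-delimited JSON that `minikube start -o=json` writes to
    // stdout and prints each decoded event as a generic map.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
        for sc.Scan() {
            line := sc.Bytes()
            if len(line) == 0 {
                continue
            }
            var ev map[string]interface{}
            if err := json.Unmarshal(line, &ev); err != nil {
                fmt.Fprintf(os.Stderr, "not JSON: %s\n", line)
                continue
            }
            fmt.Printf("%v\n", ev)
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }

Piping the same start command shown above into this program would print one decoded event per line; the subtest itself only checks that the command finishes successfully.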

                                                
                                    
x
+
TestDownloadOnly/v1.14.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/kubectl
--- PASS: TestDownloadOnly/v1.14.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.14.0/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20210812165933-27878
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20210812165933-27878: exit status 85 (279.068043ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/12 16:59:33
	Running on machine: 37310
	Binary: Built with gc go1.16.7 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 16:59:33.743277   27895 out.go:298] Setting OutFile to fd 1 ...
	I0812 16:59:33.743401   27895 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 16:59:33.743406   27895 out.go:311] Setting ErrFile to fd 2...
	I0812 16:59:33.743409   27895 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 16:59:33.743497   27895 root.go:313] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	W0812 16:59:33.743594   27895 root.go:291] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: no such file or directory
	I0812 16:59:33.744021   27895 out.go:305] Setting JSON to true
	I0812 16:59:33.763849   27895 start.go:111] hostinfo: {"hostname":"37310.local","uptime":10747,"bootTime":1628802026,"procs":337,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"c86236b2-4976-3542-80ca-74a6b8b4ba03"}
	W0812 16:59:33.763941   27895 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0812 16:59:33.792476   27895 notify.go:169] Checking for updates...
	I0812 16:59:33.818616   27895 driver.go:335] Setting default libvirt URI to qemu:///system
	W0812 16:59:33.903720   27895 docker.go:108] docker version returned error: exit status 1
	I0812 16:59:33.945576   27895 start.go:278] selected driver: docker
	I0812 16:59:33.945597   27895 start.go:751] validating driver "docker" against <nil>
	I0812 16:59:33.945717   27895 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 16:59:34.109610   27895 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/
local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 16:59:34.162618   27895 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 16:59:34.327593   27895 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/
local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 16:59:34.354645   27895 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0812 16:59:34.406282   27895 start_flags.go:344] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0812 16:59:34.406381   27895 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 16:59:34.406402   27895 cni.go:93] Creating CNI manager for ""
	I0812 16:59:34.406410   27895 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0812 16:59:34.406416   27895 start_flags.go:277] config:
	{Name:download-only-20210812165933-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210812165933-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 16:59:34.432276   27895 cache.go:117] Beginning downloading kic base image for docker with docker
	I0812 16:59:34.458114   27895 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0812 16:59:34.458122   27895 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0812 16:59:34.458355   27895 cache.go:108] acquiring lock: {Name:mk41df554439775e3cc736bb8e1cf02f8b64502c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 16:59:34.458352   27895 cache.go:108] acquiring lock: {Name:mkb52fbf434f2b9e1de52e89d5a71bf5213d10a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 16:59:34.459690   27895 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/download-only-20210812165933-27878/config.json ...
	I0812 16:59:34.460336   27895 cache.go:108] acquiring lock: {Name:mke3c2f713f068cc2dea2fb3130e16f48f6a780c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 16:59:34.460337   27895 cache.go:108] acquiring lock: {Name:mk016f2286ee34d1d606a4a01822667ff9d02855 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 16:59:34.460681   27895 lock.go:36] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/download-only-20210812165933-27878/config.json: {Name:mkb4ae000d136854cc929b4a18e3c9c4fdfba2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 16:59:34.460554   27895 cache.go:108] acquiring lock: {Name:mk9b2f953d731d048a0ec389861ee5a53e4fa9c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 16:59:34.460568   27895 cache.go:108] acquiring lock: {Name:mk5f2ffdce52a799a54292f974ee284da0961522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 16:59:34.460475   27895 cache.go:108] acquiring lock: {Name:mk2a0687c63deb1dfffd790f37979cbef61fd0a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 16:59:34.460815   27895 cache.go:108] acquiring lock: {Name:mkef7c876cabcfc731761bcde93144613593f1a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 16:59:34.461048   27895 cache.go:108] acquiring lock: {Name:mkb5d3777771f5e87bc16a28c86cff1874c8b028 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 16:59:34.461152   27895 cache.go:108] acquiring lock: {Name:mkd149ef32401597915c67cabd0075d5ed6c7cf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 16:59:34.461395   27895 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.14.0
	I0812 16:59:34.461681   27895 image.go:133] retrieving image: k8s.gcr.io/coredns:1.3.1
	I0812 16:59:34.461703   27895 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.14.0
	I0812 16:59:34.461841   27895 image.go:133] retrieving image: k8s.gcr.io/pause:3.1
	I0812 16:59:34.461862   27895 image.go:133] retrieving image: k8s.gcr.io/etcd:3.3.10
	I0812 16:59:34.461893   27895 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.14.0
	I0812 16:59:34.462739   27895 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0812 16:59:34.462746   27895 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0812 16:59:34.462769   27895 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.14.0
	I0812 16:59:34.462984   27895 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0812 16:59:34.463010   27895 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 16:59:34.463469   27895 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.14.0/kubectl
	I0812 16:59:34.463465   27895 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.14.0/kubeadm
	I0812 16:59:34.463465   27895 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.14.0/kubelet
	I0812 16:59:34.471386   27895 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.14.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0812 16:59:34.471439   27895 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.14.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0812 16:59:34.472036   27895 image.go:175] daemon lookup for k8s.gcr.io/coredns:1.3.1: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0812 16:59:34.474987   27895 image.go:175] daemon lookup for docker.io/kubernetesui/dashboard:v2.1.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0812 16:59:34.475005   27895 image.go:175] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.4: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0812 16:59:34.474997   27895 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.14.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0812 16:59:34.474999   27895 image.go:175] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0812 16:59:34.475118   27895 image.go:175] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0812 16:59:34.474999   27895 image.go:175] daemon lookup for k8s.gcr.io/etcd:3.3.10: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0812 16:59:34.476938   27895 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.14.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0812 16:59:34.618349   27895 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 to local cache
	I0812 16:59:34.618562   27895 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local cache directory
	I0812 16:59:34.618649   27895 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 to local cache
	I0812 16:59:35.456662   27895 cache.go:162] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0
	I0812 16:59:35.471011   27895 cache.go:162] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4
	I0812 16:59:35.510190   27895 cache.go:162] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0
	I0812 16:59:35.526479   27895 cache.go:162] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0812 16:59:35.731899   27895 cache.go:162] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.1
	I0812 16:59:36.092287   27895 cache.go:162] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0
	I0812 16:59:36.139153   27895 cache.go:157] /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0812 16:59:36.139176   27895 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 1.680776886s
	I0812 16:59:36.139193   27895 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0812 16:59:36.295655   27895 cache.go:162] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0
	I0812 16:59:36.295702   27895 cache.go:162] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1
	I0812 16:59:36.295703   27895 cache.go:162] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0
	I0812 16:59:36.310396   27895 cache.go:162] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10
	I0812 16:59:36.506724   27895 cache.go:157] /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
	I0812 16:59:36.506745   27895 cache.go:97] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 2.046438379s
	I0812 16:59:36.506755   27895 cache.go:81] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
	I0812 16:59:36.855712   27895 cache.go:157] /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0812 16:59:36.855732   27895 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 2.394953935s
	I0812 16:59:36.855747   27895 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0812 16:59:37.210453   27895 cache.go:157] /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0812 16:59:37.210474   27895 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.749887703s
	I0812 16:59:37.210487   27895 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0812 16:59:38.662428   27895 cache.go:157] /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1 exists
	I0812 16:59:38.662449   27895 cache.go:97] cache image "k8s.gcr.io/coredns:1.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1" took 4.202182974s
	I0812 16:59:38.662458   27895 cache.go:81] save to tar file k8s.gcr.io/coredns:1.3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1 succeeded
	I0812 16:59:38.886611   27895 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/darwin/v1.14.0/kubectl
	I0812 16:59:39.594252   27895 cache.go:157] /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0 exists
	I0812 16:59:39.594279   27895 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0" took 5.134315231s
	I0812 16:59:39.594291   27895 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.14.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0 succeeded
	I0812 16:59:39.944720   27895 cache.go:157] /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0 exists
	I0812 16:59:39.944741   27895 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0" took 5.484623936s
	I0812 16:59:39.944750   27895 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.14.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0 succeeded
	I0812 16:59:40.230491   27895 cache.go:157] /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0 exists
	I0812 16:59:40.230514   27895 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0" took 5.771949917s
	I0812 16:59:40.230524   27895 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.14.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0 succeeded
	I0812 16:59:40.266711   27895 cache.go:157] /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0 exists
	I0812 16:59:40.266736   27895 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0" took 5.808235196s
	I0812 16:59:40.266746   27895 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.14.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0 succeeded
	I0812 16:59:40.336090   27895 cache.go:157] /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10 exists
	I0812 16:59:40.336108   27895 cache.go:97] cache image "k8s.gcr.io/etcd:3.3.10" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10" took 5.876615677s
	I0812 16:59:40.336117   27895 cache.go:81] save to tar file k8s.gcr.io/etcd:3.3.10 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10 succeeded
	I0812 16:59:40.336128   27895 cache.go:88] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210812165933-27878"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.28s)
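LogsDuration passes even though `minikube logs -p download-only-20210812165933-27878` exits with status 85, so what the subtest exercises is the time the command takes rather than a zero exit code. A minimal stand-alone sketch of that pattern follows; it is illustrative only, not the helper used by aaa_download_only_test.go, and the 30-second deadline is an arbitrary choice here.

    // logs_duration_sketch.go - illustrative only. Runs a command under a
    // deadline, measures how long it takes, and treats one specific non-zero
    // exit code as acceptable, mirroring the "exit status 85 but PASS"
    // pattern in the log above.
    package main

    import (
        "context"
        "errors"
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        start := time.Now()
        cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "logs",
            "-p", "download-only-20210812165933-27878")
        out, err := cmd.CombinedOutput()
        elapsed := time.Since(start)

        exitCode := 0
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            exitCode = ee.ExitCode()
        } else if err != nil {
            fmt.Fprintln(os.Stderr, err) // command did not run at all
            os.Exit(1)
        }

        // Exit status 85 is what "minikube logs" returned for this
        // download-only profile in the report; anything else non-zero would
        // be unexpected here.
        if exitCode != 0 && exitCode != 85 {
            fmt.Fprintf(os.Stderr, "unexpected exit code %d\n%s", exitCode, out)
            os.Exit(1)
        }
        fmt.Printf("logs finished in %s with exit code %d\n", elapsed, exitCode)
    }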

                                                
                                    
x
+
TestDownloadOnly/v1.21.3/json-events (8.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20210812165933-27878 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20210812165933-27878 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=docker --driver=docker : (8.010820719s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (8.01s)

                                                
                                    
x
+
TestDownloadOnly/v1.21.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)
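preload-exists for v1.21.3 passes because the preload tarball is already present in the local cache. A simplified stand-alone check of the same condition is sketched below; the cache layout and file name are taken from the preload download path recorded later in this section, and the MINIKUBE_HOME handling here is deliberately cruder than minikube's own resolution.

    // preload_check_sketch.go - hypothetical, simplified check; the real test
    // derives the path via minikube's own helpers.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        base := os.Getenv("MINIKUBE_HOME") // simplified; minikube resolves this with more cases
        if base == "" {
            home, err := os.UserHomeDir()
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            base = filepath.Join(home, ".minikube")
        }
        // File name matches the v1.21.3 preload the report shows being downloaded.
        tarball := filepath.Join(base, "cache", "preloaded-tarball",
            "preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4")
        info, err := os.Stat(tarball)
        if err != nil {
            fmt.Fprintf(os.Stderr, "preload missing: %v\n", err)
            os.Exit(1)
        }
        fmt.Printf("preload present: %s (%d bytes)\n", tarball, info.Size())
    }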

                                                
                                    
x
+
TestDownloadOnly/v1.21.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/kubectl
--- PASS: TestDownloadOnly/v1.21.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.21.3/LogsDuration (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20210812165933-27878
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20210812165933-27878: exit status 85 (274.23093ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/12 17:00:02
	Running on machine: 37310
	Binary: Built with gc go1.16.7 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 17:00:02.553016   27984 out.go:298] Setting OutFile to fd 1 ...
	I0812 17:00:02.553158   27984 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 17:00:02.553163   27984 out.go:311] Setting ErrFile to fd 2...
	I0812 17:00:02.553165   27984 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 17:00:02.553250   27984 root.go:313] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	W0812 17:00:02.553338   27984 root.go:291] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: no such file or directory
	I0812 17:00:02.553496   27984 out.go:305] Setting JSON to true
	I0812 17:00:02.573551   27984 start.go:111] hostinfo: {"hostname":"37310.local","uptime":10776,"bootTime":1628802026,"procs":342,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"c86236b2-4976-3542-80ca-74a6b8b4ba03"}
	W0812 17:00:02.573643   27984 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0812 17:00:02.602668   27984 notify.go:169] Checking for updates...
	W0812 17:00:02.631154   27984 start.go:659] api.Load failed for download-only-20210812165933-27878: filestore "download-only-20210812165933-27878": Docker machine "download-only-20210812165933-27878" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0812 17:00:02.631239   27984 driver.go:335] Setting default libvirt URI to qemu:///system
	W0812 17:00:02.631280   27984 start.go:659] api.Load failed for download-only-20210812165933-27878: filestore "download-only-20210812165933-27878": Docker machine "download-only-20210812165933-27878" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0812 17:00:02.725948   27984 docker.go:132] docker version: linux-20.10.6
	I0812 17:00:02.726083   27984 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 17:00:02.904597   27984 info.go:263] docker info: {ID:NUVB:KIYS:WZ5S:BBBQ:I5K6:TSUW:ISZD:Z2IF:JI5D:OMPC:DPCS:TFBF Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-13 00:00:02.842872654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 17:00:02.932461   27984 start.go:278] selected driver: docker
	I0812 17:00:02.932489   27984 start.go:751] validating driver "docker" against &{Name:download-only-20210812165933-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210812165933-27878 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 17:00:02.932953   27984 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 17:00:03.110264   27984 info.go:263] docker info: {ID:NUVB:KIYS:WZ5S:BBBQ:I5K6:TSUW:ISZD:Z2IF:JI5D:OMPC:DPCS:TFBF Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-13 00:00:03.047748521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 17:00:03.112596   27984 cni.go:93] Creating CNI manager for ""
	I0812 17:00:03.112613   27984 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0812 17:00:03.112626   27984 start_flags.go:277] config:
	{Name:download-only-20210812165933-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210812165933-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 17:00:03.140595   27984 cache.go:117] Beginning downloading kic base image for docker with docker
	I0812 17:00:03.166672   27984 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0812 17:00:03.166671   27984 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0812 17:00:03.247317   27984 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4
	I0812 17:00:03.247357   27984 cache.go:56] Caching tarball of preloaded images
	I0812 17:00:03.247590   27984 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0812 17:00:03.276555   27984 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 ...
	I0812 17:00:03.291107   27984 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0812 17:00:03.291123   27984 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0812 17:00:03.410245   27984 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4?checksum=md5:3231aae7a1f1d991e6e500ed4461f6b3 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210812165933-27878"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.27s)
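The preload tarball in the log above is fetched with its expected digest encoded in the URL as `?checksum=md5:<hex>`. As a generic illustration of the verification such a parameter implies (this is not minikube's downloader, just a sketch), a small program can recompute the md5 of the downloaded file and compare it with the value after `md5:`.

    // md5_verify_sketch.go - generic illustration, not minikube's download code.
    // Verifies a local file against a hex md5 digest such as the one in the
    // "?checksum=md5:<hex>" query parameter of the preload URL above.
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    func main() {
        if len(os.Args) != 3 {
            fmt.Fprintf(os.Stderr, "usage: %s <file> <expected-md5-hex>\n", os.Args[0])
            os.Exit(2)
        }
        f, err := os.Open(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != os.Args[2] {
            fmt.Fprintf(os.Stderr, "checksum mismatch: got %s, want %s\n", got, os.Args[2])
            os.Exit(1)
        }
        fmt.Println("checksum OK")
    }

For the v1.21.3 preload shown above, the expected hex digest would be the value following `md5:` in the download URL.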

                                                
                                    
x
+
TestDownloadOnly/v1.22.0-rc.0/json-events (8.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20210812165933-27878 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20210812165933-27878 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=docker --driver=docker : (8.136757125s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (8.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20210812165933-27878
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20210812165933-27878: exit status 85 (277.072148ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/12 17:00:10
	Running on machine: 37310
	Binary: Built with gc go1.16.7 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 17:00:10.835649   28018 out.go:298] Setting OutFile to fd 1 ...
	I0812 17:00:10.835774   28018 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 17:00:10.835781   28018 out.go:311] Setting ErrFile to fd 2...
	I0812 17:00:10.835784   28018 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 17:00:10.835861   28018 root.go:313] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	W0812 17:00:10.835951   28018 root.go:291] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: no such file or directory
	I0812 17:00:10.836093   28018 out.go:305] Setting JSON to true
	I0812 17:00:10.854867   28018 start.go:111] hostinfo: {"hostname":"37310.local","uptime":10784,"bootTime":1628802026,"procs":346,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"c86236b2-4976-3542-80ca-74a6b8b4ba03"}
	W0812 17:00:10.854952   28018 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0812 17:00:10.882626   28018 notify.go:169] Checking for updates...
	W0812 17:00:10.909750   28018 start.go:659] api.Load failed for download-only-20210812165933-27878: filestore "download-only-20210812165933-27878": Docker machine "download-only-20210812165933-27878" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0812 17:00:10.909854   28018 driver.go:335] Setting default libvirt URI to qemu:///system
	W0812 17:00:10.909911   28018 start.go:659] api.Load failed for download-only-20210812165933-27878: filestore "download-only-20210812165933-27878": Docker machine "download-only-20210812165933-27878" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0812 17:00:11.006752   28018 docker.go:132] docker version: linux-20.10.6
	I0812 17:00:11.006873   28018 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 17:00:11.181029   28018 info.go:263] docker info: {ID:NUVB:KIYS:WZ5S:BBBQ:I5K6:TSUW:ISZD:Z2IF:JI5D:OMPC:DPCS:TFBF Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-13 00:00:11.126423992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 17:00:11.207751   28018 start.go:278] selected driver: docker
	I0812 17:00:11.207767   28018 start.go:751] validating driver "docker" against &{Name:download-only-20210812165933-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210812165933-27878 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 17:00:11.208015   28018 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 17:00:11.386171   28018 info.go:263] docker info: {ID:NUVB:KIYS:WZ5S:BBBQ:I5K6:TSUW:ISZD:Z2IF:JI5D:OMPC:DPCS:TFBF Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-13 00:00:11.329726134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 17:00:11.388490   28018 cni.go:93] Creating CNI manager for ""
	I0812 17:00:11.388510   28018 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0812 17:00:11.388516   28018 start_flags.go:277] config:
	{Name:download-only-20210812165933-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210812165933-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 17:00:11.415732   28018 cache.go:117] Beginning downloading kic base image for docker with docker
	I0812 17:00:11.442366   28018 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0812 17:00:11.442363   28018 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0812 17:00:11.526682   28018 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0812 17:00:11.526722   28018 cache.go:56] Caching tarball of preloaded images
	I0812 17:00:11.527044   28018 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0812 17:00:11.553896   28018 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0812 17:00:11.564586   28018 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0812 17:00:11.564598   28018 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0812 17:00:11.652281   28018 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:24e0063355d7da59de0c5d619223de56 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210812165933-27878"
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.28s)

TestDownloadOnly/DeleteAll (1.14s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 delete --all
aaa_download_only_test.go:189: (dbg) Done: out/minikube-darwin-amd64 delete --all: (1.1350009s)
--- PASS: TestDownloadOnly/DeleteAll (1.14s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.64s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20210812165933-27878
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.64s)

TestDownloadOnlyKic (7.34s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20210812170021-27878 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:226: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20210812170021-27878 --force --alsologtostderr --driver=docker : (5.790552674s)
helpers_test.go:176: Cleaning up "download-docker-20210812170021-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20210812170021-27878
--- PASS: TestDownloadOnlyKic (7.34s)

TestOffline (123.14s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20210812174942-27878 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20210812174942-27878 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (1m47.101632002s)
helpers_test.go:176: Cleaning up "offline-docker-20210812174942-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20210812174942-27878
=== CONT  TestOffline
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20210812174942-27878: (16.037931061s)
--- PASS: TestOffline (123.14s)

TestDockerFlags (64.19s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20210812175922-27878 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
* Starting control plane node minikube in cluster minikube
* Download complete!
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20210812175922-27878 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (50.447890235s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20210812175922-27878 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20210812175922-27878 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-20210812175922-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20210812175922-27878
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20210812175922-27878: (12.376538945s)
--- PASS: TestDockerFlags (64.19s)

TestForceSystemdFlag (70.41s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20210812175936-27878 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20210812175936-27878 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (1m3.154184127s)
docker_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20210812175936-27878 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-20210812175936-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20210812175936-27878
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20210812175936-27878: (6.545842795s)
--- PASS: TestForceSystemdFlag (70.41s)

TestForceSystemdEnv (110.09s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20210812175727-27878 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0812 17:58:17.981726   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 17:58:45.720541   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
docker_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20210812175727-27878 --memory=2048 --alsologtostderr -v=5 --driver=docker : (1m33.580009473s)
docker_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20210812175727-27878 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-20210812175727-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20210812175727-27878
=== CONT  TestForceSystemdEnv
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20210812175727-27878: (15.751549851s)
--- PASS: TestForceSystemdEnv (110.09s)

TestHyperKitDriverInstallOrUpdate (5.26s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
* minikube v1.22.0 on darwin
- MINIKUBE_LOCATION=12230
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current133634929
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current133634929/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current133634929/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current133634929/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
--- PASS: TestHyperKitDriverInstallOrUpdate (5.26s)

TestErrorSpam/setup (79.27s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20210812170159-27878 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 --driver=docker 
error_spam_test.go:78: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20210812170159-27878 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 --driver=docker : (1m19.265674966s)
error_spam_test.go:88: acceptable stderr: "! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilites with Kubernetes 1.21.3."
--- PASS: TestErrorSpam/setup (79.27s)

TestErrorSpam/start (2.27s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 start --dry-run
--- PASS: TestErrorSpam/start (2.27s)

TestErrorSpam/status (1.96s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 status
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 status
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 status
--- PASS: TestErrorSpam/status (1.96s)

TestErrorSpam/pause (2.2s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 pause
--- PASS: TestErrorSpam/pause (2.20s)

TestErrorSpam/unpause (2.23s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 unpause
--- PASS: TestErrorSpam/unpause (2.23s)

TestErrorSpam/stop (12.83s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 stop
error_spam_test.go:156: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 stop: (12.07509578s)
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20210812170159-27878 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20210812170159-27878 stop
--- PASS: TestErrorSpam/stop (12.83s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/test/nested/copy/27878/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (121.8s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20210812170347-27878 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:1982: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20210812170347-27878 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (2m1.794724613s)
--- PASS: TestFunctional/serial/StartWithProxy (121.80s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.45s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20210812170347-27878 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20210812170347-27878 --alsologtostderr -v=8: (7.450241664s)
functional_test.go:631: soft start took 7.450756833s for "functional-20210812170347-27878" cluster.
--- PASS: TestFunctional/serial/SoftStart (7.45s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (2.34s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210812170347-27878 get po -A
functional_test.go:660: (dbg) Done: kubectl --context functional-20210812170347-27878 get po -A: (2.340153848s)
--- PASS: TestFunctional/serial/KubectlGetPods (2.34s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Done: out/minikube-darwin-amd64 -p functional-20210812170347-27878 cache add k8s.gcr.io/pause:3.1: (1.527929625s)
functional_test.go:982: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Done: out/minikube-darwin-amd64 -p functional-20210812170347-27878 cache add k8s.gcr.io/pause:3.3: (2.594192979s)
functional_test.go:982: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 cache add k8s.gcr.io/pause:latest
functional_test.go:982: (dbg) Done: out/minikube-darwin-amd64 -p functional-20210812170347-27878 cache add k8s.gcr.io/pause:latest: (2.418164466s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.54s)

TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210812170347-27878 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/functional-20210812170347-27878347983269
functional_test.go:1024: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 cache add minikube-local-cache-test:functional-20210812170347-27878
functional_test.go:1024: (dbg) Done: out/minikube-darwin-amd64 -p functional-20210812170347-27878 cache add minikube-local-cache-test:functional-20210812170347-27878: (1.533148242s)
functional_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 cache delete minikube-local-cache-test:functional-20210812170347-27878
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210812170347-27878
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.71s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (630.131151ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 cache reload
functional_test.go:1089: (dbg) Done: out/minikube-darwin-amd64 -p functional-20210812170347-27878 cache reload: (1.762164829s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.73s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.48s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 kubectl -- --context functional-20210812170347-27878 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.48s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210812170347-27878 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.56s)

TestFunctional/serial/ExtraConfig (43.22s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20210812170347-27878 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:715: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20210812170347-27878 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.21981665s)
functional_test.go:719: restart took 43.21992202s for "functional-20210812170347-27878" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.22s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210812170347-27878 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.42s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 logs
functional_test.go:1165: (dbg) Done: out/minikube-darwin-amd64 -p functional-20210812170347-27878 logs: (3.417808251s)
--- PASS: TestFunctional/serial/LogsCmd (3.42s)

TestFunctional/serial/LogsFileCmd (3.55s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/functional-20210812170347-27878654264512/logs.txt
functional_test.go:1181: (dbg) Done: out/minikube-darwin-amd64 -p functional-20210812170347-27878 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/functional-20210812170347-27878654264512/logs.txt: (3.548470785s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.55s)

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20210812170347-27878 config get cpus: exit status 14 (46.900414ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 config set cpus 2
functional_test.go:1129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 config get cpus
functional_test.go:1129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 config unset cpus
functional_test.go:1129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20210812170347-27878 config get cpus: exit status 14 (44.036882ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

TestFunctional/parallel/DashboardCmd (4.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20210812170347-27878 --alsologtostderr -v=1]
2021/08/12 17:08:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:862: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20210812170347-27878 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 30671: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (4.01s)

TestFunctional/parallel/DryRun (1.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20210812170347-27878 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:919: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20210812170347-27878 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (615.221534ms)
-- stdout --
	* [functional-20210812170347-27878] minikube v1.22.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0812 17:08:18.394776   30597 out.go:298] Setting OutFile to fd 1 ...
	I0812 17:08:18.394917   30597 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 17:08:18.394922   30597 out.go:311] Setting ErrFile to fd 2...
	I0812 17:08:18.394925   30597 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 17:08:18.395012   30597 root.go:313] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0812 17:08:18.395271   30597 out.go:305] Setting JSON to false
	I0812 17:08:18.415800   30597 start.go:111] hostinfo: {"hostname":"37310.local","uptime":11272,"bootTime":1628802026,"procs":337,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"c86236b2-4976-3542-80ca-74a6b8b4ba03"}
	W0812 17:08:18.416516   30597 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0812 17:08:18.443498   30597 out.go:177] * [functional-20210812170347-27878] minikube v1.22.0 on Darwin 11.2.3
	I0812 17:08:18.491373   30597 out.go:177]   - MINIKUBE_LOCATION=12230
	I0812 17:08:18.516950   30597 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0812 17:08:18.543213   30597 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0812 17:08:18.569106   30597 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0812 17:08:18.570083   30597 driver.go:335] Setting default libvirt URI to qemu:///system
	I0812 17:08:18.669582   30597 docker.go:132] docker version: linux-20.10.6
	I0812 17:08:18.669758   30597 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 17:08:18.866974   30597 info.go:263] docker info: {ID:NUVB:KIYS:WZ5S:BBBQ:I5K6:TSUW:ISZD:Z2IF:JI5D:OMPC:DPCS:TFBF Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:51 SystemTime:2021-08-13 00:08:18.791440226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 17:08:18.894197   30597 out.go:177] * Using the docker driver based on existing profile
	I0812 17:08:18.894249   30597 start.go:278] selected driver: docker
	I0812 17:08:18.894269   30597 start.go:751] validating driver "docker" against &{Name:functional-20210812170347-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210812170347-27878 Namespace:default APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-pro
visioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 17:08:18.894449   30597 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0812 17:08:18.923489   30597 out.go:177] 
	W0812 17:08:18.923639   30597 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0812 17:08:18.949361   30597 out.go:177] 
** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20210812170347-27878 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.39s)

TestFunctional/parallel/InternationalLanguage (0.63s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20210812170347-27878 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:956: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20210812170347-27878 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (626.612557ms)
-- stdout --
	* [functional-20210812170347-27878] minikube v1.22.0 sur Darwin 11.2.3
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0812 17:08:19.782080   30635 out.go:298] Setting OutFile to fd 1 ...
	I0812 17:08:19.782202   30635 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 17:08:19.782207   30635 out.go:311] Setting ErrFile to fd 2...
	I0812 17:08:19.782210   30635 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 17:08:19.782324   30635 root.go:313] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0812 17:08:19.782571   30635 out.go:305] Setting JSON to false
	I0812 17:08:19.801384   30635 start.go:111] hostinfo: {"hostname":"37310.local","uptime":11273,"bootTime":1628802026,"procs":333,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"c86236b2-4976-3542-80ca-74a6b8b4ba03"}
	W0812 17:08:19.801484   30635 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0812 17:08:19.828501   30635 out.go:177] * [functional-20210812170347-27878] minikube v1.22.0 sur Darwin 11.2.3
	I0812 17:08:19.875358   30635 out.go:177]   - MINIKUBE_LOCATION=12230
	I0812 17:08:19.901409   30635 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0812 17:08:19.927235   30635 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0812 17:08:19.953192   30635 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0812 17:08:19.954229   30635 driver.go:335] Setting default libvirt URI to qemu:///system
	I0812 17:08:20.057762   30635 docker.go:132] docker version: linux-20.10.6
	I0812 17:08:20.057887   30635 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 17:08:20.245749   30635 info.go:263] docker info: {ID:NUVB:KIYS:WZ5S:BBBQ:I5K6:TSUW:ISZD:Z2IF:JI5D:OMPC:DPCS:TFBF Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:51 SystemTime:2021-08-13 00:08:20.173721731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0812 17:08:20.272664   30635 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0812 17:08:20.272705   30635 start.go:278] selected driver: docker
	I0812 17:08:20.272724   30635 start.go:751] validating driver "docker" against &{Name:functional-20210812170347-27878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210812170347-27878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 17:08:20.272886   30635 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0812 17:08:20.320555   30635 out.go:177] 
	W0812 17:08:20.320800   30635 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0812 17:08:20.346541   30635 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 status
functional_test.go:815: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:826: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.00s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [6bf74d04-0791-4e8f-bac8-668449a7ca2b] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.017196541s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210812170347-27878 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210812170347-27878 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210812170347-27878 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210812170347-27878 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [586200b5-e9b0-4052-8679-137131aec718] Pending
helpers_test.go:343: "sp-pod" [586200b5-e9b0-4052-8679-137131aec718] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [586200b5-e9b0-4052-8679-137131aec718] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.017745735s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210812170347-27878 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210812170347-27878 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210812170347-27878 delete -f testdata/storage-provisioner/pod.yaml: (1.505144582s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210812170347-27878 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [1a63a027-5ece-4b3b-ba28-c91a8effb779] Pending
helpers_test.go:343: "sp-pod" [1a63a027-5ece-4b3b-ba28-c91a8effb779] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [1a63a027-5ece-4b3b-ba28-c91a8effb779] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010802178s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210812170347-27878 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.45s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "echo hello"
functional_test.go:1515: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.33s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:549: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (19.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1546: (dbg) Run:  kubectl --context functional-20210812170347-27878 replace --force -f testdata/mysql.yaml
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-9bbbc5bbb-qldsz" [f772f1d0-b795-411d-a6e2-ae5d339bf0fd] Pending
helpers_test.go:343: "mysql-9bbbc5bbb-qldsz" [f772f1d0-b795-411d-a6e2-ae5d339bf0fd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-qldsz" [f772f1d0-b795-411d-a6e2-ae5d339bf0fd] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.017635728s
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210812170347-27878 exec mysql-9bbbc5bbb-qldsz -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210812170347-27878 exec mysql-9bbbc5bbb-qldsz -- mysql -ppassword -e "show databases;": exit status 1 (167.331469ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210812170347-27878 exec mysql-9bbbc5bbb-qldsz -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210812170347-27878 exec mysql-9bbbc5bbb-qldsz -- mysql -ppassword -e "show databases;": exit status 1 (134.334937ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210812170347-27878 exec mysql-9bbbc5bbb-qldsz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.61s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/27878/hosts within VM

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1679: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "sudo cat /etc/test/nested/copy/27878/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.74s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (4.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/27878.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "sudo cat /etc/ssl/certs/27878.pem"
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/27878.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "sudo cat /usr/share/ca-certificates/27878.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/278782.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "sudo cat /etc/ssl/certs/278782.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/278782.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "sudo cat /usr/share/ca-certificates/278782.pem"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (4.30s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (2.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20210812170347-27878 docker-env) && out/minikube-darwin-amd64 status -p functional-20210812170347-27878"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20210812170347-27878 docker-env) && out/minikube-darwin-amd64 status -p functional-20210812170347-27878": (1.691157979s)
functional_test.go:503: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20210812170347-27878 docker-env) && docker images"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:503: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20210812170347-27878 docker-env) && docker images": (1.159252437s)
--- PASS: TestFunctional/parallel/DockerEnv (2.85s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210812170347-27878 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/LoadImage (2.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Done: docker pull busybox:1.33: (1.099642763s)
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210812170347-27878
functional_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 image load docker.io/library/busybox:load-functional-20210812170347-27878

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:373: (dbg) Run:  out/minikube-darwin-amd64 ssh -p functional-20210812170347-27878 -- docker image inspect docker.io/library/busybox:load-functional-20210812170347-27878
--- PASS: TestFunctional/parallel/LoadImage (2.75s)

                                                
                                    
x
+
TestFunctional/parallel/RemoveImage (3.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Done: docker pull busybox:1.32: (1.066507292s)
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210812170347-27878
functional_test.go:344: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 image load docker.io/library/busybox:remove-functional-20210812170347-27878
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 image rm docker.io/library/busybox:remove-functional-20210812170347-27878

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 ssh -p functional-20210812170347-27878 -- docker images
--- PASS: TestFunctional/parallel/RemoveImage (3.16s)

                                                
                                    
x
+
TestFunctional/parallel/LoadImageFromFile (2.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31
functional_test.go:279: (dbg) Done: docker pull busybox:1.31: (1.197998936s)
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210812170347-27878
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210812170347-27878
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 image load /Users/jenkins/workspace/busybox.tar
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 ssh -p functional-20210812170347-27878 -- docker images
--- PASS: TestFunctional/parallel/LoadImageFromFile (2.93s)

                                                
                                    
x
+
TestFunctional/parallel/BuildImage (4.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 image build -t localhost/my-image:functional-20210812170347-27878 testdata/build

                                                
                                                
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Done: out/minikube-darwin-amd64 -p functional-20210812170347-27878 image build -t localhost/my-image:functional-20210812170347-27878 testdata/build: (3.432452874s)
functional_test.go:412: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20210812170347-27878 image build -t localhost/my-image:functional-20210812170347-27878 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM busybox
latest: Pulling from library/busybox
b71f96345d44: Pulling fs layer
b71f96345d44: Verifying Checksum
b71f96345d44: Download complete
b71f96345d44: Pull complete
Digest: sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60
Status: Downloaded newer image for busybox:latest
---> 69593048aa3a
Step 2/3 : RUN true
---> Running in ef9f3baa2460
Removing intermediate container ef9f3baa2460
---> 0b0313c691e9
Step 3/3 : ADD content.txt /
---> 900b9f66dba5
Successfully built 900b9f66dba5
Successfully tagged localhost/my-image:functional-20210812170347-27878
functional_test.go:373: (dbg) Run:  out/minikube-darwin-amd64 ssh -p functional-20210812170347-27878 -- docker image inspect localhost/my-image:functional-20210812170347-27878
--- PASS: TestFunctional/parallel/BuildImage (4.10s)

                                                
                                    
x
+
TestFunctional/parallel/ListImages (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:446: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20210812170347-27878 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-20210812170347-27878
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
--- PASS: TestFunctional/parallel/ListImages (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "sudo systemctl is-active crio": exit status 1 (672.464082ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 version -o=json --components
functional_test.go:2016: (dbg) Done: out/minikube-darwin-amd64 -p functional-20210812170347-27878 version -o=json --components: (1.167591645s)
--- PASS: TestFunctional/parallel/Version/components (1.17s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20210812170347-27878 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210812170347-27878 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (13.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (13.64s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1206: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.86s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1245: Took "697.579823ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1259: Took "67.265219ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1295: Took "691.456785ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1308: Took "93.450863ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.79s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20210812170347-27878 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest361767199:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1628813285370275000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest361767199/created-by-test
functional_test_mount_test.go:110: wrote "test-1628813285370275000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest361767199/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1628813285370275000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest361767199/test-1628813285370275000
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (772.469589ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 13 00:08 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 13 00:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 13 00:08 test-1628813285370275000
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh cat /mount-9p/test-1628813285370275000

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210812170347-27878 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [9d057607-c6cc-49f6-823b-f1ab7ddb66be] Pending
helpers_test.go:343: "busybox-mount" [9d057607-c6cc-49f6-823b-f1ab7ddb66be] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:343: "busybox-mount" [9d057607-c6cc-49f6-823b-f1ab7ddb66be] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.017899699s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20210812170347-27878 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20210812170347-27878 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest361767199:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.68s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (3.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20210812170347-27878 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest800551922:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (757.739899ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20210812170347-27878 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest800551922:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "sudo umount -f /mount-9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh "sudo umount -f /mount-9p": exit status 1 (613.822084ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:244: "out/minikube-darwin-amd64 -p functional-20210812170347-27878 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20210812170347-27878 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest800551922:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.30s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20210812170347-27878 tunnel --alsologtostderr] ...
helpers_test.go:501: unable to terminate pid 30200: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                    
x
+
TestFunctional/delete_busybox_image (0.24s)

                                                
                                                
=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210812170347-27878
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210812170347-27878
--- PASS: TestFunctional/delete_busybox_image (0.24s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.12s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210812170347-27878
--- PASS: TestFunctional/delete_my-image_image (0.12s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.12s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210812170347-27878
--- PASS: TestFunctional/delete_minikube_cached_images (0.12s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.78s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20210812171142-27878 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20210812171142-27878 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (116.898459ms)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210812171142-27878] minikube v1.22.0 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"e0367459-5d00-4d7d-997a-83ebd54c991f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"a58bc8ed-dbdd-476b-9c11-dbdfad9e0f60","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig"},"datacontenttype":"application/json","id":"8a3b997e-5c48-4771-add0-ff6fb1c55cf9","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"},"datacontenttype":"application/json","id":"99cb2cf0-a112-4c6f-bb80-148e2151bab0","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube"},"datacontenttype":"application/json","id":"07512a9f-d983-4a11-9f8a-50de2cc391ee","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"8449d92b-8551-422e-84a8-5511c82db2bb","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210812171142-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20210812171142-27878
--- PASS: TestErrorJSONOutput (0.78s)

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (90.79s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20210812171143-27878 --network=
E0812 17:12:13.790089   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 17:12:13.797307   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 17:12:13.807572   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 17:12:13.832931   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 17:12:13.880002   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 17:12:13.964988   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 17:12:14.130014   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 17:12:14.451803   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 17:12:15.093111   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 17:12:16.381546   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 17:12:18.942007   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 17:12:24.063215   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 17:12:34.303803   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 17:12:54.790000   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20210812171143-27878 --network=: (1m17.608804162s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210812171143-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20210812171143-27878
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20210812171143-27878: (13.065618156s)
--- PASS: TestKicCustomNetwork/create_custom_network (90.79s)
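
Note: the check at kic_custom_network_test.go:101 relies on docker network ls --format {{.Name}} printing one network name per line, which lets the test look for the created network by name. A small Go sketch of the same verification; the network name below is just the one from this run and is purely illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// networkExists lists Docker networks exactly as the test does and checks
	// whether one of them matches the expected name.
	func networkExists(name string) (bool, error) {
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			return false, err
		}
		for _, n := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if n == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := networkExists("docker-network-20210812171143-27878")
		fmt.Println(ok, err)
	}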

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (76.45s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20210812171313-27878 --network=bridge
E0812 17:13:35.752431   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20210812171313-27878 --network=bridge: (1m6.90759782s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210812171313-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20210812171313-27878
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20210812171313-27878: (9.417567903s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (76.45s)

                                                
                                    
TestKicExistingNetwork (85.79s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20210812171435-27878 --network=existing-network
E0812 17:14:57.683637   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20210812171435-27878 --network=existing-network: (1m7.45084121s)
helpers_test.go:176: Cleaning up "existing-network-20210812171435-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20210812171435-27878
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20210812171435-27878: (12.911665142s)
--- PASS: TestKicExistingNetwork (85.79s)

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (232.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20210812171556-27878 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0812 17:17:13.795445   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 17:17:41.531208   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20210812171556-27878 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (3m50.983979107s)
multinode_test.go:87: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status --alsologtostderr
multinode_test.go:87: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status --alsologtostderr: (1.11881962s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (232.10s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (8.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:462: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (2.547698415s)
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- rollout status deployment/busybox
multinode_test.go:467: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- rollout status deployment/busybox: (4.239521716s)
multinode_test.go:473: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- exec busybox-84b6686758-7vkwj -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- exec busybox-84b6686758-j9l2d -- nslookup kubernetes.io
multinode_test.go:503: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- exec busybox-84b6686758-7vkwj -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- exec busybox-84b6686758-j9l2d -- nslookup kubernetes.default
multinode_test.go:511: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- exec busybox-84b6686758-7vkwj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- exec busybox-84b6686758-j9l2d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.44s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- exec busybox-84b6686758-7vkwj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- exec busybox-84b6686758-7vkwj -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:529: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- exec busybox-84b6686758-j9l2d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20210812171556-27878 -- exec busybox-84b6686758-j9l2d -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)
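
Note: each busybox pod resolves host.minikube.internal and pings the result. The shell pipeline does the extraction: awk 'NR==5' keeps the fifth line of busybox nslookup output (the line carrying the resolved address) and cut -d' ' -f3 takes the IP field from it, which is then fed to ping -c 1. A hedged Go sketch that runs the same pipeline through plain kubectl exec; it assumes a kubeconfig context named after the profile, and the pod name is just the one from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostIPFromPod reproduces the pipeline used above: resolve
	// host.minikube.internal inside the pod and return the extracted IP.
	func hostIPFromPod(context, pod string) (string, error) {
		script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
		out, err := exec.Command("kubectl", "--context", context, "exec", pod,
			"--", "sh", "-c", script).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		ip, err := hostIPFromPod("multinode-20210812171556-27878", "busybox-84b6686758-7vkwj")
		fmt.Println(ip, err)
	}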

                                                
                                    
TestMultiNode/serial/AddNode (111.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20210812171556-27878 -v 3 --alsologtostderr
multinode_test.go:106: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20210812171556-27878 -v 3 --alsologtostderr: (1m49.549955086s)
multinode_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status --alsologtostderr
multinode_test.go:112: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status --alsologtostderr: (1.566563147s)
--- PASS: TestMultiNode/serial/AddNode (111.12s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status --output json --alsologtostderr
multinode_test.go:169: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status --output json --alsologtostderr: (1.550566927s)
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 cp testdata/cp-test.txt multinode-20210812171556-27878-m02:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 ssh -n multinode-20210812171556-27878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 cp testdata/cp-test.txt multinode-20210812171556-27878-m03:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 ssh -n multinode-20210812171556-27878-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.31s)
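
Note: the copy/verify pair above is minikube cp to a named node followed by minikube ssh -n <node> "sudo cat ..." on the same path. A sketch of that round trip under the same assumptions (binary path, profile, node and file paths are the ones from this run):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// verifyCopied copies a local file to a node, reads it back over ssh,
	// and compares the contents, mirroring the cp/ssh steps above.
	func verifyCopied(profile, node, local, remote string) error {
		if err := exec.Command("out/minikube-darwin-amd64", "-p", profile,
			"cp", local, node+":"+remote).Run(); err != nil {
			return err
		}
		got, err := exec.Command("out/minikube-darwin-amd64", "-p", profile,
			"ssh", "-n", node, "sudo cat "+remote).Output()
		if err != nil {
			return err
		}
		want, err := os.ReadFile(local)
		if err != nil {
			return err
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			return fmt.Errorf("content mismatch on %s", node)
		}
		return nil
	}

	func main() {
		fmt.Println(verifyCopied("multinode-20210812171556-27878",
			"multinode-20210812171556-27878-m02",
			"testdata/cp-test.txt", "/home/docker/cp-test.txt"))
	}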

                                                
                                    
TestMultiNode/serial/StopNode (11.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20210812171556-27878 node stop m03: (8.790222s)
multinode_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status: exit status 7 (1.23363597s)

                                                
                                                
-- stdout --
	multinode-20210812171556-27878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210812171556-27878-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210812171556-27878-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status --alsologtostderr: exit status 7 (1.239221404s)

                                                
                                                
-- stdout --
	multinode-20210812171556-27878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210812171556-27878-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210812171556-27878-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 17:22:04.830960   33102 out.go:298] Setting OutFile to fd 1 ...
	I0812 17:22:04.831071   33102 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 17:22:04.831076   33102 out.go:311] Setting ErrFile to fd 2...
	I0812 17:22:04.831078   33102 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 17:22:04.831156   33102 root.go:313] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0812 17:22:04.831338   33102 out.go:305] Setting JSON to false
	I0812 17:22:04.831353   33102 mustload.go:65] Loading cluster: multinode-20210812171556-27878
	I0812 17:22:04.831583   33102 status.go:253] checking status of multinode-20210812171556-27878 ...
	I0812 17:22:04.831943   33102 cli_runner.go:115] Run: docker container inspect multinode-20210812171556-27878 --format={{.State.Status}}
	I0812 17:22:04.950490   33102 status.go:328] multinode-20210812171556-27878 host status = "Running" (err=<nil>)
	I0812 17:22:04.950517   33102 host.go:66] Checking if "multinode-20210812171556-27878" exists ...
	I0812 17:22:04.950828   33102 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210812171556-27878
	I0812 17:22:05.070838   33102 host.go:66] Checking if "multinode-20210812171556-27878" exists ...
	I0812 17:22:05.071230   33102 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 17:22:05.071315   33102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210812171556-27878
	I0812 17:22:05.190404   33102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50957 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210812171556-27878/id_rsa Username:docker}
	I0812 17:22:05.281591   33102 ssh_runner.go:149] Run: systemctl --version
	I0812 17:22:05.286383   33102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0812 17:22:05.295645   33102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210812171556-27878
	I0812 17:22:05.415585   33102 kubeconfig.go:93] found "multinode-20210812171556-27878" server: "https://127.0.0.1:50956"
	I0812 17:22:05.415607   33102 api_server.go:164] Checking apiserver status ...
	I0812 17:22:05.415646   33102 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 17:22:05.430919   33102 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2039/cgroup
	I0812 17:22:05.438667   33102 api_server.go:180] apiserver freezer: "7:freezer:/docker/28bf8bae46fae0e2feaa0f154bd4a664e0e97fe0286012c55d6d862df5a80f01/kubepods/burstable/pod763fee13bd302bbb539922b1de812899/0221fde577fbf48861012a6b455b3295ca33a29f0a5be53668f1b6f406844d4d"
	I0812 17:22:05.438738   33102 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/28bf8bae46fae0e2feaa0f154bd4a664e0e97fe0286012c55d6d862df5a80f01/kubepods/burstable/pod763fee13bd302bbb539922b1de812899/0221fde577fbf48861012a6b455b3295ca33a29f0a5be53668f1b6f406844d4d/freezer.state
	I0812 17:22:05.446249   33102 api_server.go:202] freezer state: "THAWED"
	I0812 17:22:05.446313   33102 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:50956/healthz ...
	I0812 17:22:05.452350   33102 api_server.go:265] https://127.0.0.1:50956/healthz returned 200:
	ok
	I0812 17:22:05.452362   33102 status.go:419] multinode-20210812171556-27878 apiserver status = Running (err=<nil>)
	I0812 17:22:05.452370   33102 status.go:255] multinode-20210812171556-27878 status: &{Name:multinode-20210812171556-27878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 17:22:05.452388   33102 status.go:253] checking status of multinode-20210812171556-27878-m02 ...
	I0812 17:22:05.452680   33102 cli_runner.go:115] Run: docker container inspect multinode-20210812171556-27878-m02 --format={{.State.Status}}
	I0812 17:22:05.570784   33102 status.go:328] multinode-20210812171556-27878-m02 host status = "Running" (err=<nil>)
	I0812 17:22:05.570806   33102 host.go:66] Checking if "multinode-20210812171556-27878-m02" exists ...
	I0812 17:22:05.571089   33102 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210812171556-27878-m02
	I0812 17:22:05.694190   33102 host.go:66] Checking if "multinode-20210812171556-27878-m02" exists ...
	I0812 17:22:05.694476   33102 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 17:22:05.694540   33102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210812171556-27878-m02
	I0812 17:22:05.813196   33102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51301 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210812171556-27878-m02/id_rsa Username:docker}
	I0812 17:22:05.900719   33102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0812 17:22:05.909987   33102 status.go:255] multinode-20210812171556-27878-m02 status: &{Name:multinode-20210812171556-27878-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0812 17:22:05.910011   33102 status.go:253] checking status of multinode-20210812171556-27878-m03 ...
	I0812 17:22:05.910323   33102 cli_runner.go:115] Run: docker container inspect multinode-20210812171556-27878-m03 --format={{.State.Status}}
	I0812 17:22:06.028903   33102 status.go:328] multinode-20210812171556-27878-m03 host status = "Stopped" (err=<nil>)
	I0812 17:22:06.028932   33102 status.go:341] host is not running, skipping remaining checks
	I0812 17:22:06.028937   33102 status.go:255] multinode-20210812171556-27878-m03 status: &{Name:multinode-20210812171556-27878-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (11.26s)
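
Note: the stderr trace shows how status decides the apiserver state on the Docker driver: pgrep finds the kube-apiserver PID, its freezer cgroup is read from /proc/<pid>/cgroup, freezer.state must be THAWED, and finally https://127.0.0.1:<forwarded port>/healthz must return 200 "ok". A condensed Go sketch of just that last healthz probe; the port is the one forwarded in this run, and certificate verification is skipped here purely for illustration (how minikube itself handles TLS is not shown in the log):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// probeHealthz performs the final step of the status check above:
	// GET /healthz on the forwarded apiserver port and expect HTTP 200.
	func probeHealthz(port int) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustrative only: the host does not trust the apiserver cert.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(fmt.Sprintf("https://127.0.0.1:%d/healthz", port))
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	}

	func main() {
		ok, err := probeHealthz(50956)
		fmt.Println(ok, err)
	}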

                                                
                                    
TestMultiNode/serial/StartAfterStop (53.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:225: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 node start m03 --alsologtostderr
E0812 17:22:13.802316   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
multinode_test.go:235: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20210812171556-27878 node start m03 --alsologtostderr: (52.213485684s)
multinode_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status
multinode_test.go:242: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status: (1.594409886s)
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (53.97s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (248.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20210812171556-27878
multinode_test.go:271: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20210812171556-27878
multinode_test.go:271: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20210812171556-27878: (38.085257582s)
multinode_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20210812171556-27878 --wait=true -v=8 --alsologtostderr
multinode_test.go:276: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20210812171556-27878 --wait=true -v=8 --alsologtostderr: (3m30.790245739s)
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20210812171556-27878
--- PASS: TestMultiNode/serial/RestartKeepsNodes (248.97s)

                                                
                                    
TestMultiNode/serial/DeleteNode (18.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 node delete m03
E0812 17:27:13.812149   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
multinode_test.go:375: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20210812171556-27878 node delete m03: (14.56079025s)
multinode_test.go:381: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status --alsologtostderr
multinode_test.go:381: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status --alsologtostderr: (1.128455296s)
multinode_test.go:395: (dbg) Run:  docker volume ls
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:405: (dbg) Done: kubectl get nodes: (2.329763333s)
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (18.19s)
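
Note: the final assertion renders every node's Ready condition through a Go template, {{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}. The same template can be exercised directly with text/template over the JSON form of the node list; a sketch that reads kubectl get nodes -o json from stdin (field names are those of the kubectl JSON output):

	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	// The template from the test: print the status of each node's Ready condition.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		var nodes map[string]interface{} // decoded output of: kubectl get nodes -o json
		if err := json.NewDecoder(os.Stdin).Decode(&nodes); err != nil {
			panic(err)
		}
		t := template.Must(template.New("ready").Parse(tmpl))
		if err := t.Execute(os.Stdout, nodes); err != nil {
			panic(err)
		}
	}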

                                                
                                    
TestMultiNode/serial/StopMultiNode (35.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 stop
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20210812171556-27878 stop: (34.521669323s)
multinode_test.go:301: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status: exit status 7 (266.774993ms)

                                                
                                                
-- stdout --
	multinode-20210812171556-27878
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210812171556-27878-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status --alsologtostderr: exit status 7 (268.911794ms)

                                                
                                                
-- stdout --
	multinode-20210812171556-27878
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210812171556-27878-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 17:28:01.999551   34061 out.go:298] Setting OutFile to fd 1 ...
	I0812 17:28:01.999891   34061 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 17:28:01.999896   34061 out.go:311] Setting ErrFile to fd 2...
	I0812 17:28:01.999899   34061 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 17:28:01.999972   34061 root.go:313] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0812 17:28:02.000151   34061 out.go:305] Setting JSON to false
	I0812 17:28:02.000166   34061 mustload.go:65] Loading cluster: multinode-20210812171556-27878
	I0812 17:28:02.000404   34061 status.go:253] checking status of multinode-20210812171556-27878 ...
	I0812 17:28:02.000784   34061 cli_runner.go:115] Run: docker container inspect multinode-20210812171556-27878 --format={{.State.Status}}
	I0812 17:28:02.114091   34061 status.go:328] multinode-20210812171556-27878 host status = "Stopped" (err=<nil>)
	I0812 17:28:02.114121   34061 status.go:341] host is not running, skipping remaining checks
	I0812 17:28:02.114127   34061 status.go:255] multinode-20210812171556-27878 status: &{Name:multinode-20210812171556-27878 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 17:28:02.114161   34061 status.go:253] checking status of multinode-20210812171556-27878-m02 ...
	I0812 17:28:02.114492   34061 cli_runner.go:115] Run: docker container inspect multinode-20210812171556-27878-m02 --format={{.State.Status}}
	I0812 17:28:02.227283   34061 status.go:328] multinode-20210812171556-27878-m02 host status = "Stopped" (err=<nil>)
	I0812 17:28:02.227303   34061 status.go:341] host is not running, skipping remaining checks
	I0812 17:28:02.227307   34061 status.go:255] multinode-20210812171556-27878-m02 status: &{Name:multinode-20210812171556-27878-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (35.06s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (149.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:325: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:335: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20210812171556-27878 --wait=true -v=8 --alsologtostderr --driver=docker 
E0812 17:28:36.914947   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
multinode_test.go:335: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20210812171556-27878 --wait=true -v=8 --alsologtostderr --driver=docker : (2m26.288059339s)
multinode_test.go:341: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status --alsologtostderr
multinode_test.go:341: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20210812171556-27878 status --alsologtostderr: (1.223156228s)
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:355: (dbg) Done: kubectl get nodes: (2.248040993s)
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (149.91s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (94.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20210812171556-27878
multinode_test.go:433: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20210812171556-27878-m02 --driver=docker 
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20210812171556-27878-m02 --driver=docker : exit status 14 (304.891862ms)

                                                
                                                
-- stdout --
	* [multinode-20210812171556-27878-m02] minikube v1.22.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210812171556-27878-m02' is duplicated with machine name 'multinode-20210812171556-27878-m02' in profile 'multinode-20210812171556-27878'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20210812171556-27878-m03 --driver=docker 
multinode_test.go:441: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20210812171556-27878-m03 --driver=docker : (1m17.345106062s)
multinode_test.go:448: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20210812171556-27878
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20210812171556-27878: exit status 1 (598.147931ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20210812171556-27878
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210812171556-27878-m03 already exists in multinode-20210812171556-27878-m03 profile
	* 

                                                
                                                
** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20210812171556-27878-m03
multinode_test.go:453: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20210812171556-27878-m03: (16.00440665s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (94.29s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:10/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:9/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

                                                
                                    
TestPreload (201.87s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20210812174109-27878 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0812 17:42:13.869942   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20210812174109-27878 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: (2m21.887015054s)
preload_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20210812174109-27878 -- docker pull busybox
preload_test.go:61: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-20210812174109-27878 -- docker pull busybox: (2.73611594s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20210812174109-27878 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20210812174109-27878 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3: (43.420275495s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20210812174109-27878 -- docker images
helpers_test.go:176: Cleaning up "test-preload-20210812174109-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20210812174109-27878
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20210812174109-27878: (13.133395692s)
--- PASS: TestPreload (201.87s)

                                                
                                    
TestSkaffold (127.49s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe534963395 version
skaffold_test.go:61: skaffold version: v1.30.0
skaffold_test.go:64: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20210812174633-27878 --memory=2600 --driver=docker 
E0812 17:47:13.877760   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
skaffold_test.go:64: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20210812174633-27878 --memory=2600 --driver=docker : (1m17.818511669s)
skaffold_test.go:84: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:108: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe534963395 run --minikube-profile skaffold-20210812174633-27878 --kube-context skaffold-20210812174633-27878 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:108: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe534963395 run --minikube-profile skaffold-20210812174633-27878 --kube-context skaffold-20210812174633-27878 --status-check=true --port-forward=false --interactive=false: (23.838318996s)
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:343: "leeroy-app-b8d4bf7d5-dlzgz" [40472592-2279-4c89-bf6c-fa476057a4b0] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-app healthy within 5.01700972s
skaffold_test.go:117: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:343: "leeroy-web-6dc7d7c64f-n6jcl" [80a84d35-9eff-4524-93ef-e87ad9f97364] Running
skaffold_test.go:117: (dbg) TestSkaffold: app=leeroy-web healthy within 5.016620856s
helpers_test.go:176: Cleaning up "skaffold-20210812174633-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20210812174633-27878
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20210812174633-27878: (13.34262184s)
--- PASS: TestSkaffold (127.49s)

                                                
                                    
TestInsufficientStorage (60.77s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20210812174841-27878 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20210812174841-27878 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (47.105116409s)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210812174841-27878] minikube v1.22.0 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"ba6de8f6-abbb-45df-8456-a5c86585e63c","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"29c4cfb6-fbb1-497a-95fb-ff182f138adf","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig"},"datacontenttype":"application/json","id":"1ac7ff17-d3b9-412e-b2c3-001dfbc6d05e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"},"datacontenttype":"application/json","id":"55536448-0ac7-4101-a607-ac0e618656d5","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube"},"datacontenttype":"application/json","id":"3e6cc4a6-b81a-447c-8fe2-6afb8089b8c7","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"99f017f5-e0a5-469e-a8b7-953cad37e3a9","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"bb70e164-810b-40a5-96ac-e4ec799c2a47","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210812174841-27878 in cluster insufficient-storage-20210812174841-27878","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"2ef450b3-4fa0-4bc2-96e6-86c592453bdf","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"f1c854c9-d207-43fd-a3b2-b931b5d16e33","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"693cbdbc-9d8a-4bdc-b5d5-137f954aeb17","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"bffa2db0-aad9-427c-ac58-af30d45ec2aa","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20210812174841-27878 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20210812174841-27878 --output=json --layout=cluster: exit status 7 (617.68558ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210812174841-27878","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210812174841-27878","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 17:49:29.072887   36563 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210812174841-27878" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20210812174841-27878 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20210812174841-27878 --output=json --layout=cluster: exit status 7 (613.716272ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210812174841-27878","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210812174841-27878","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 17:49:29.687081   36580 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210812174841-27878" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	E0812 17:49:29.697863   36580 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/insufficient-storage-20210812174841-27878/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20210812174841-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20210812174841-27878
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20210812174841-27878: (12.43232802s)
--- PASS: TestInsufficientStorage (60.77s)
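
Note: status --output=json --layout=cluster returns one JSON document whose StatusCode fields are HTTP-like numeric codes (507 InsufficientStorage for the cluster above, 405 Stopped and 500 Error for components). A small Go sketch that decodes only the fields visible in that output and flags the storage condition:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// clusterStatus mirrors only the fields present in the --layout=cluster
	// output captured above.
	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string `json:"Name"`
			StatusCode int    `json:"StatusCode"`
			StatusName string `json:"StatusName"`
		} `json:"Nodes"`
	}

	func main() {
		var cs clusterStatus // e.g. pipe the status JSON in on stdin
		if err := json.NewDecoder(os.Stdin).Decode(&cs); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d %s\n", cs.Name, cs.StatusCode, cs.StatusName)
		for _, n := range cs.Nodes {
			fmt.Printf("  node %s: %d %s\n", n.Name, n.StatusCode, n.StatusName)
		}
		if cs.StatusCode == 507 {
			fmt.Println("cluster reports InsufficientStorage")
		}
	}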

                                                
                                    
TestRunningBinaryUpgrade (255.17s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.140371111.exe start -p running-upgrade-20210812175457-27878 --memory=2200 --vm-driver=docker 
E0812 17:56:01.873079   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.140371111.exe start -p running-upgrade-20210812175457-27878 --memory=2200 --vm-driver=docker : (1m45.405512449s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-20210812175457-27878 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0812 17:57:13.848936   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-20210812175457-27878 --memory=2200 --alsologtostderr -v=1 --driver=docker : (2m21.029430259s)
helpers_test.go:176: Cleaning up "running-upgrade-20210812175457-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20210812175457-27878
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20210812175457-27878: (7.717818058s)
--- PASS: TestRunningBinaryUpgrade (255.17s)

                                                
                                    
TestKubernetesUpgrade (176.85s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20210812175201-27878 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker 
E0812 17:52:13.849506   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20210812175201-27878 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker : (1m3.842373587s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20210812175201-27878
version_upgrade_test.go:229: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20210812175201-27878: (12.600311404s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20210812175201-27878 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20210812175201-27878 status --format={{.Host}}: exit status 7 (160.845832ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20210812175201-27878 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker 
E0812 17:53:17.977376   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 17:53:17.982638   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 17:53:17.993181   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 17:53:18.017647   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 17:53:18.067647   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 17:53:18.148559   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 17:53:18.317706   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 17:53:18.638294   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 17:53:19.280037   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 17:53:20.561051   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 17:53:23.126254   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 17:53:28.251394   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 17:53:38.494807   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 17:53:58.976242   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
version_upgrade_test.go:245: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20210812175201-27878 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker : (1m0.141243307s)
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210812175201-27878 version --output=json
version_upgrade_test.go:269: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:271: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20210812175201-27878 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker 
version_upgrade_test.go:271: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20210812175201-27878 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker : exit status 106 (403.26299ms)
-- stdout --
	* [kubernetes-upgrade-20210812175201-27878] minikube v1.22.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210812175201-27878
	    minikube start -p kubernetes-upgrade-20210812175201-27878 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210812175201-278782 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210812175201-27878 --kubernetes-version=v1.22.0-rc.0
	    
** /stderr **
version_upgrade_test.go:275: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20210812175201-27878 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker 
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:277: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20210812175201-27878 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker : (21.598354687s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210812175201-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20210812175201-27878
=== CONT  TestKubernetesUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20210812175201-27878: (17.992946249s)
--- PASS: TestKubernetesUpgrade (176.85s)

TestMissingContainerUpgrade (188.11s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.943390022.exe start -p missing-upgrade-20210812175145-27878 --memory=2200 --driver=docker 
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.943390022.exe start -p missing-upgrade-20210812175145-27878 --memory=2200 --driver=docker : (1m5.748168403s)
version_upgrade_test.go:320: (dbg) Run:  docker stop missing-upgrade-20210812175145-27878
version_upgrade_test.go:320: (dbg) Done: docker stop missing-upgrade-20210812175145-27878: (6.094066443s)
version_upgrade_test.go:325: (dbg) Run:  docker rm missing-upgrade-20210812175145-27878
version_upgrade_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 start -p missing-upgrade-20210812175145-27878 --memory=2200 --alsologtostderr -v=1 --driver=docker 
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:331: (dbg) Done: out/minikube-darwin-amd64 start -p missing-upgrade-20210812175145-27878 --memory=2200 --alsologtostderr -v=1 --driver=docker : (1m40.377981509s)
helpers_test.go:176: Cleaning up "missing-upgrade-20210812175145-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20210812175145-27878
E0812 17:54:39.944850   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
=== CONT  TestMissingContainerUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20210812175145-27878: (14.952274317s)
--- PASS: TestMissingContainerUpgrade (188.11s)

TestPause/serial/Start (107.81s)

=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20210812174942-27878 --memory=2048 --install-addons=false --wait=all --driver=docker 
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20210812174942-27878 --memory=2048 --install-addons=false --wait=all --driver=docker : (1m47.805145102s)
--- PASS: TestPause/serial/Start (107.81s)

TestPause/serial/SecondStartNoReconfiguration (7.45s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20210812174942-27878 --alsologtostderr -v=1 --driver=docker 
pause_test.go:89: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20210812174942-27878 --alsologtostderr -v=1 --driver=docker : (7.435856204s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.45s)

TestPause/serial/Pause (0.89s)

=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20210812174942-27878 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.89s)

TestPause/serial/VerifyStatus (0.66s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20210812174942-27878 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20210812174942-27878 --output=json --layout=cluster: exit status 2 (658.456918ms)
-- stdout --
	{"Name":"pause-20210812174942-27878","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210812174942-27878","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.66s)

TestPause/serial/Unpause (0.88s)

=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-20210812174942-27878 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.88s)

TestPause/serial/PauseAgain (1.09s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20210812174942-27878 --alsologtostderr -v=5
pause_test.go:107: (dbg) Done: out/minikube-darwin-amd64 pause -p pause-20210812174942-27878 --alsologtostderr -v=5: (1.088771951s)
--- PASS: TestPause/serial/PauseAgain (1.09s)

TestPause/serial/DeletePaused (15.5s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-20210812174942-27878 --alsologtostderr -v=5
=== CONT  TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-20210812174942-27878 --alsologtostderr -v=5: (15.503884798s)
--- PASS: TestPause/serial/DeletePaused (15.50s)

TestPause/serial/VerifyDeletedResources (3.87s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:139: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (3.612236171s)
pause_test.go:165: (dbg) Run:  docker ps -a
pause_test.go:170: (dbg) Run:  docker volume inspect pause-20210812174942-27878
pause_test.go:170: (dbg) Non-zero exit: docker volume inspect pause-20210812174942-27878: exit status 1 (132.039171ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210812174942-27878
** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (3.87s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.77s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20210812175453-27878
version_upgrade_test.go:208: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20210812175453-27878: (2.771643355s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.77s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (12.05s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (12.05s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.03s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.22.0 on darwin
- MINIKUBE_LOCATION=12230
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current113945872
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current113945872/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current113945872/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current113945872/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.03s)

TestStartStop/group/old-k8s-version/serial/FirstStart (132.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20210812180047-27878 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-20210812180047-27878 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0: (2m12.929132416s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (132.93s)

TestStartStop/group/no-preload/serial/FirstStart (149.9s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20210812180131-27878 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.0-rc.0
E0812 18:01:56.955358   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 18:02:13.850009   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20210812180131-27878 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.0-rc.0: (2m29.898078434s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (149.90s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210812180047-27878 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) Done: kubectl --context old-k8s-version-20210812180047-27878 create -f testdata/busybox.yaml: (1.975688719s)
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [3564b940-fbd2-11eb-bdda-024286c78cb8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [3564b940-fbd2-11eb-bdda-024286c78cb8] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.022314556s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210812180047-27878 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.13s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20210812180047-27878 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210812180047-27878 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/old-k8s-version/serial/Stop (12.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20210812180047-27878 --alsologtostderr -v=3
E0812 18:03:17.983737   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
start_stop_delete_test.go:201: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20210812180047-27878 --alsologtostderr -v=3: (12.654529498s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.65s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20210812180047-27878 -n old-k8s-version-20210812180047-27878
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20210812180047-27878 -n old-k8s-version-20210812180047-27878: exit status 7 (161.038758ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20210812180047-27878 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/old-k8s-version/serial/SecondStart (444.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20210812180047-27878 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-20210812180047-27878 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0: (7m23.795876731s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20210812180047-27878 -n old-k8s-version-20210812180047-27878
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (444.48s)

TestStartStop/group/no-preload/serial/DeployApp (11.95s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210812180131-27878 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) Done: kubectl --context no-preload-20210812180131-27878 create -f testdata/busybox.yaml: (1.779553013s)
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [0aebebae-de1d-474c-851e-b06c24ec88d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [0aebebae-de1d-474c-851e-b06c24ec88d3] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.030890926s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210812180131-27878 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.95s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20210812180131-27878 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210812180131-27878 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/no-preload/serial/Stop (17.09s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20210812180131-27878 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20210812180131-27878 --alsologtostderr -v=3: (17.092656524s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (17.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.43s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20210812180131-27878 -n no-preload-20210812180131-27878
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20210812180131-27878 -n no-preload-20210812180131-27878: exit status 7 (164.885322ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20210812180131-27878 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.43s)

TestStartStop/group/no-preload/serial/SecondStart (375.08s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20210812180131-27878 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.0-rc.0
E0812 18:07:13.836227   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 18:08:17.979476   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 18:09:41.082322   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
start_stop_delete_test.go:229: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20210812180131-27878 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.0-rc.0: (6m14.334377739s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20210812180131-27878 -n no-preload-20210812180131-27878
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (375.08s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-psng6" [4c8c63ff-eabb-4b8f-9a92-3749edb62ba6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-psng6" [4c8c63ff-eabb-4b8f-9a92-3749edb62ba6] Running
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.020544826s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-q45ct" [1f57b007-fbd3-11eb-b835-024245c5c9ab] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019992253s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.97s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-psng6" [4c8c63ff-eabb-4b8f-9a92-3749edb62ba6] Running
=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006489322s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210812180131-27878 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:264: (dbg) Done: kubectl --context no-preload-20210812180131-27878 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.961577241s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.97s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-q45ct" [1f57b007-fbd3-11eb-b835-024245c5c9ab] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008388843s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210812180047-27878 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:264: (dbg) Done: kubectl --context old-k8s-version-20210812180047-27878 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (2.165525017s)
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7.18s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.04s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20210812180131-27878 "sudo crictl images -o json"
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Done: out/minikube-darwin-amd64 ssh -p no-preload-20210812180131-27878 "sudo crictl images -o json": (1.04324806s)
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.04s)

TestStartStop/group/no-preload/serial/Pause (5.52s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20210812180131-27878 --alsologtostderr -v=1
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Done: out/minikube-darwin-amd64 pause -p no-preload-20210812180131-27878 --alsologtostderr -v=1: (1.287857398s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20210812180131-27878 -n no-preload-20210812180131-27878
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20210812180131-27878 -n no-preload-20210812180131-27878: exit status 2 (753.945277ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20210812180131-27878 -n no-preload-20210812180131-27878
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20210812180131-27878 -n no-preload-20210812180131-27878: exit status 2 (759.355161ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20210812180131-27878 --alsologtostderr -v=1
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Done: out/minikube-darwin-amd64 unpause -p no-preload-20210812180131-27878 --alsologtostderr -v=1: (1.094284331s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20210812180131-27878 -n no-preload-20210812180131-27878
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20210812180131-27878 -n no-preload-20210812180131-27878
--- PASS: TestStartStop/group/no-preload/serial/Pause (5.52s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-20210812180047-27878 "sudo crictl images -o json"
=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.77s)

TestStartStop/group/old-k8s-version/serial/Pause (5.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-20210812180047-27878 --alsologtostderr -v=1
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Done: out/minikube-darwin-amd64 pause -p old-k8s-version-20210812180047-27878 --alsologtostderr -v=1: (1.011034175s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20210812180047-27878 -n old-k8s-version-20210812180047-27878
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20210812180047-27878 -n old-k8s-version-20210812180047-27878: exit status 2 (773.214117ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210812180047-27878 -n old-k8s-version-20210812180047-27878
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210812180047-27878 -n old-k8s-version-20210812180047-27878: exit status 2 (847.585167ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-20210812180047-27878 --alsologtostderr -v=1
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Done: out/minikube-darwin-amd64 unpause -p old-k8s-version-20210812180047-27878 --alsologtostderr -v=1: (1.036458868s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20210812180047-27878 -n old-k8s-version-20210812180047-27878
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210812180047-27878 -n old-k8s-version-20210812180047-27878
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.37s)

TestStartStop/group/embed-certs/serial/FirstStart (108.87s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20210812181121-27878 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.21.3
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20210812181121-27878 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.21.3: (1m48.870168766s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (108.87s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (105.77s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20210812181125-27878 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.21.3
E0812 18:12:13.846511   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 18:13:02.232432   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:13:02.238231   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:13:02.250214   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:13:02.272573   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:13:02.318480   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:13:02.404942   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:13:02.565109   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:13:02.885745   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:13:03.526118   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:13:04.806421   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:13:07.368155   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20210812181125-27878 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.21.3: (1m45.769520399s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (105.77s)

TestStartStop/group/embed-certs/serial/DeployApp (12.84s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210812181121-27878 create -f testdata/busybox.yaml
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Done: kubectl --context embed-certs-20210812181121-27878 create -f testdata/busybox.yaml: (3.639809738s)
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [346ac88d-72da-4451-b5f1-9f43c22cef26] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:343: "busybox" [346ac88d-72da-4451-b5f1-9f43c22cef26] Running
E0812 18:13:17.976280   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.05403704s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210812181121-27878 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.84s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (12.88s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210812181125-27878 create -f testdata/busybox.yaml
E0812 18:13:12.495127   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Done: kubectl --context default-k8s-different-port-20210812181125-27878 create -f testdata/busybox.yaml: (3.682738441s)
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [925bdf61-f79e-4e93-8761-58c9b507cb4d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:343: "busybox" [925bdf61-f79e-4e93-8761-58c9b507cb4d] Running
E0812 18:13:22.735809   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.018035241s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210812181125-27878 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (12.88s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20210812181121-27878 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210812181121-27878 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/embed-certs/serial/Stop (17.41s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20210812181121-27878 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20210812181121-27878 --alsologtostderr -v=3: (17.40866573s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (17.41s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20210812181125-27878 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210812181125-27878 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/Stop (17.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20210812181125-27878 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20210812181125-27878 --alsologtostderr -v=3: (17.932460988s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (17.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20210812181121-27878 -n embed-certs-20210812181121-27878
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20210812181121-27878 -n embed-certs-20210812181121-27878: exit status 7 (158.419377ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20210812181121-27878 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.40s)
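The "status error: exit status 7 (may be ok)" lines above reflect that `minikube status` exits non-zero when the host is stopped, and the test tolerates that instead of failing before re-enabling the dashboard addon. A small Go sketch of that tolerance, assuming only that a stopped profile yields a non-zero exit code (7 in this run); the profile name is taken from the log above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "embed-certs-20210812181121-27878" // profile from the log above

	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()

	if exitErr, ok := err.(*exec.ExitError); ok {
		// A stopped cluster reports a non-zero status code (7 in the run above);
		// treat it as acceptable and carry on, as the test does.
		fmt.Printf("status exited %d (may be ok): %s", exitErr.ExitCode(), out)
	} else if err != nil {
		panic(err) // could not even launch the binary
	} else {
		fmt.Printf("host state: %s", out)
	}
}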

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (395.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20210812181121-27878 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.21.3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20210812181121-27878 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.21.3: (6m34.996853701s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20210812181121-27878 -n embed-certs-20210812181121-27878
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (395.82s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20210812181125-27878 -n default-k8s-different-port-20210812181125-27878
E0812 18:13:43.216472   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20210812181125-27878 -n default-k8s-different-port-20210812181125-27878: exit status 7 (158.183516ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20210812181125-27878 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/SecondStart (356.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20210812181125-27878 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.21.3
E0812 18:14:03.077547   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:14:03.082721   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:14:03.093117   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:14:03.121071   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:14:03.161229   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:14:03.241383   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:14:03.406256   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:14:03.733891   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:14:04.374111   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:14:05.654289   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:14:08.220969   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:14:13.341186   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:14:23.590499   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:14:24.183613   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:14:44.076225   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:15:25.037695   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:15:46.106733   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:16:46.966515   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:17:13.842627   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 18:18:02.236076   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:18:17.980993   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 18:18:29.949181   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:18:36.953135   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 18:19:03.050744   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:19:30.776521   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
start_stop_delete_test.go:229: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20210812181125-27878 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.21.3: (5m55.343290097s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20210812181125-27878 -n default-k8s-different-port-20210812181125-27878
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (356.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (7.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-2xhhv" [dd67dd3a-ab2d-4c19-8155-b54594f0266b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-2xhhv" [dd67dd3a-ab2d-4c19-8155-b54594f0266b] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.017317793s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (7.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (7.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-2xhhv" [dd67dd3a-ab2d-4c19-8155-b54594f0266b] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006747778s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210812181125-27878 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:264: (dbg) Done: kubectl --context default-k8s-different-port-20210812181125-27878 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (2.535198887s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (7.55s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20210812181125-27878 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.72s)
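VerifyKubernetesImages runs "sudo crictl images -o json" over minikube ssh and flags tags that are not part of the expected Kubernetes image set, which is why busybox:1.28.4-glibc is called out as non-minikube above. A hedged Go sketch of decoding that output, assuming crictl's usual JSON shape (a top-level "images" array whose entries carry "repoTags"); the profile name is copied from the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the assumed shape of `crictl images -o json` output.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	profile := "default-k8s-different-port-20210812181125-27878" // profile from the log above

	out, err := exec.Command("out/minikube-darwin-amd64", "ssh", "-p", profile,
		"sudo crictl images -o json").Output()
	if err != nil {
		panic(err)
	}

	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}

	// Print every tag so non-minikube images (e.g. the busybox test image) stand out.
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}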

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/Pause (6.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20210812181125-27878 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Done: out/minikube-darwin-amd64 pause -p default-k8s-different-port-20210812181125-27878 --alsologtostderr -v=1: (1.749585328s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210812181125-27878 -n default-k8s-different-port-20210812181125-27878
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210812181125-27878 -n default-k8s-different-port-20210812181125-27878: exit status 2 (729.623859ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210812181125-27878 -n default-k8s-different-port-20210812181125-27878
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210812181125-27878 -n default-k8s-different-port-20210812181125-27878: exit status 2 (946.690195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20210812181125-27878 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Done: out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20210812181125-27878 --alsologtostderr -v=1: (1.265819958s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210812181125-27878 -n default-k8s-different-port-20210812181125-27878
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210812181125-27878 -n default-k8s-different-port-20210812181125-27878
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (6.35s)
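The Pause step pauses the profile, reads the APIServer and Kubelet fields with `status --format` Go templates (both return exit status 2 while paused, which the test accepts as "may be ok"), then unpauses. A minimal Go sketch of that sequence under the same assumptions; the profile name is taken from the log above:

package main

import (
	"fmt"
	"os/exec"
)

// status runs `minikube status` with a Go-template format and returns the
// rendered field plus the exit code (paused components report non-zero).
func status(profile, format string) (string, int) {
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format="+format, "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return string(out), code
}

func main() {
	profile := "default-k8s-different-port-20210812181125-27878" // profile from the log above

	if err := exec.Command("out/minikube-darwin-amd64", "pause", "-p", profile).Run(); err != nil {
		panic(err)
	}

	api, apiCode := status(profile, "{{.APIServer}}")
	kubelet, kubeletCode := status(profile, "{{.Kubelet}}")
	// In the run above these came back as Paused (exit 2) and Stopped (exit 2).
	fmt.Printf("apiserver=%s(exit %d) kubelet=%s(exit %d)\n", api, apiCode, kubelet, kubeletCode)

	if err := exec.Command("out/minikube-darwin-amd64", "unpause", "-p", profile).Run(); err != nil {
		panic(err)
	}
}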

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (76.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20210812182009-27878 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20210812182009-27878 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.0-rc.0: (1m16.659451049s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (76.66s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-m2dvs" [22248876-83cd-4898-9303-68a6d0468165] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017920706s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (7.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-m2dvs" [22248876-83cd-4898-9303-68a6d0468165] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006508875s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210812181121-27878 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:264: (dbg) Done: kubectl --context embed-certs-20210812181121-27878 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (2.583029423s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (7.59s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20210812181121-27878 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.67s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (4.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20210812181121-27878 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20210812181121-27878 -n embed-certs-20210812181121-27878
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20210812181121-27878 -n embed-certs-20210812181121-27878: exit status 2 (700.678168ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20210812181121-27878 -n embed-certs-20210812181121-27878
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20210812181121-27878 -n embed-certs-20210812181121-27878: exit status 2 (700.353065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20210812181121-27878 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20210812181121-27878 -n embed-certs-20210812181121-27878
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20210812181121-27878 -n embed-certs-20210812181121-27878
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (118.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (1m58.297187482s)
--- PASS: TestNetworkPlugins/group/auto/Start (118.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20210812182009-27878 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (16.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20210812182009-27878 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20210812182009-27878 --alsologtostderr -v=3: (16.460033864s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (16.46s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20210812182009-27878 -n newest-cni-20210812182009-27878
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20210812182009-27878 -n newest-cni-20210812182009-27878: exit status 7 (156.331475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20210812182009-27878 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (42.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20210812182009-27878 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.0-rc.0
E0812 18:22:13.803740   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
start_stop_delete_test.go:229: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20210812182009-27878 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.0-rc.0: (41.683394893s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20210812182009-27878 -n newest-cni-20210812182009-27878
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (42.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20210812182009-27878 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.71s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20210812182009-27878 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Done: out/minikube-darwin-amd64 pause -p newest-cni-20210812182009-27878 --alsologtostderr -v=1: (2.712382974s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20210812182009-27878 -n newest-cni-20210812182009-27878
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20210812182009-27878 -n newest-cni-20210812182009-27878: exit status 2 (661.140133ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20210812182009-27878 -n newest-cni-20210812182009-27878
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20210812182009-27878 -n newest-cni-20210812182009-27878: exit status 2 (654.785978ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20210812182009-27878 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20210812182009-27878 -n newest-cni-20210812182009-27878
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20210812182009-27878 -n newest-cni-20210812182009-27878
--- PASS: TestStartStop/group/newest-cni/serial/Pause (6.45s)
E0812 18:39:38.267774   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20210812175913-27878 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210812175913-27878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:131: (dbg) Done: kubectl --context auto-20210812175913-27878 replace --force -f testdata/netcat-deployment.yaml: (2.500162064s)
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-qkpzw" [70ea42a7-4907-45d0-85b9-c2ab608ecfd8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-qkpzw" [70ea42a7-4907-45d0-85b9-c2ab608ecfd8] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.012115252s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (103.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p false-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (1m43.795011118s)
--- PASS: TestNetworkPlugins/group/false/Start (103.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210812175913-27878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (5.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210812175913-27878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:231: (dbg) Non-zero exit: kubectl --context auto-20210812175913-27878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.18949601s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.19s)
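The DNS, Localhost and HairPin checks above all run probes from inside the netcat deployment: an nslookup of kubernetes.default, an nc to localhost:8080, and an nc to the service's own name. As the PASS above shows, a non-zero exit from the hairpin probe does not fail the test for this plugin. A hedged Go sketch of the three probes, assuming kubectl is on PATH; the context name is copied from the log and the probe helper is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a shell command inside the netcat deployment and reports whether it succeeded.
func probe(ctx, shellCmd string) bool {
	cmd := exec.Command("kubectl", "--context", ctx, "exec", "deployment/netcat",
		"--", "/bin/sh", "-c", shellCmd)
	return cmd.Run() == nil
}

func main() {
	ctx := "auto-20210812175913-27878" // context from the log above

	fmt.Println("dns      :", probe(ctx, "nslookup kubernetes.default"))
	fmt.Println("localhost:", probe(ctx, "nc -w 5 -i 5 -z localhost 8080"))
	// The hairpin probe targets the pod's own service; in the run above it
	// exited 1 and the test still passed for this plugin.
	fmt.Println("hairpin  :", probe(ctx, "nc -w 5 -i 5 -z netcat 8080"))
}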

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/Start (158.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 
E0812 18:23:15.090849   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:23:15.096628   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:23:15.107252   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:23:15.129590   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:23:15.170239   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:23:15.250372   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:23:15.411809   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:23:15.740779   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:23:16.386979   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:23:17.667131   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:23:17.943511   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 18:23:20.232413   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:23:25.357364   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:23:35.607362   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:23:56.091665   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:24:03.041640   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (2m38.431234534s)
--- PASS: TestNetworkPlugins/group/cilium/Start (158.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (0.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20210812175913-27878 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (12.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context false-20210812175913-27878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:131: (dbg) Done: kubectl --context false-20210812175913-27878 replace --force -f testdata/netcat-deployment.yaml: (2.488979512s)
net_test.go:145: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-4wsmt" [c0a24227-5ac1-42e5-87cd-345ab061e0c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-4wsmt" [c0a24227-5ac1-42e5-87cd-345ab061e0c0] Running
E0812 18:24:37.057848   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.009229009s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:162: (dbg) Run:  kubectl --context false-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:181: (dbg) Run:  kubectl --context false-20210812175913-27878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (5.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:231: (dbg) Run:  kubectl --context false-20210812175913-27878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:231: (dbg) Non-zero exit: kubectl --context false-20210812175913-27878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.139086978s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-m6ttj" [781e9699-0da6-42d7-9461-4a4a4d568218] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.014971086s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/KubeletFlags (0.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20210812175913-27878 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/NetCatPod (13.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210812175913-27878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:131: (dbg) Done: kubectl --context cilium-20210812175913-27878 replace --force -f testdata/netcat-deployment.yaml: (2.478749794s)
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-5zdd5" [60c766ee-c71e-4d30-9adc-38db83e16e2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-5zdd5" [60c766ee-c71e-4d30-9adc-38db83e16e2b] Running
E0812 18:25:58.978661   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 11.009524076s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (13.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210812175913-27878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210812175913-27878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210812175913-27878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-weave/Start (129.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-weave-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker 
E0812 18:26:21.054720   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 18:27:13.804223   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
E0812 18:27:43.905507   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:27:43.912374   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:27:43.922690   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:27:43.942975   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:27:43.991992   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:27:44.072379   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:27:44.241724   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:27:44.562373   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:27:45.204589   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:27:46.492190   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:27:49.052435   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:27:54.173511   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:28:02.204604   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:28:04.416695   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:28:15.091233   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210812181125-27878/client.crt: no such file or directory
E0812 18:28:17.939728   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/skaffold-20210812174633-27878/client.crt: no such file or directory
E0812 18:28:24.904803   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p custom-weave-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker : (2m9.202490708s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (129.20s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.68s)
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-weave-20210812175913-27878 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.68s)

TestNetworkPlugins/group/custom-weave/NetCatPod (12.91s)
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210812175913-27878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:131: (dbg) Done: kubectl --context custom-weave-20210812175913-27878 replace --force -f testdata/netcat-deployment.yaml: (2.87669376s)
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-wlfrv" [7f9bafe5-77e1-42db-874c-29a2dc00a0c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-wlfrv" [7f9bafe5-77e1-42db-874c-29a2dc00a0c9] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 10.006825162s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (12.91s)

TestNetworkPlugins/group/enable-default-cni/Start (104.36s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E0812 18:29:03.043513   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:29:05.873086   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
E0812 18:29:25.270320   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210812180047-27878/client.crt: no such file or directory
E0812 18:29:32.526227   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:29:32.531640   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:29:32.541908   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:29:32.562576   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:29:32.603595   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:29:32.691961   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:29:32.856722   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:29:33.179771   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:29:33.820024   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:29:35.117756   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:29:37.678214   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:29:42.798370   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:29:53.038558   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:30:13.522689   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
E0812 18:30:26.145623   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210812180131-27878/client.crt: no such file or directory
E0812 18:30:27.795319   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (1m44.362240068s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (104.36s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.67s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20210812175913-27878 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.67s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.7s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210812175913-27878 replace --force -f testdata/netcat-deployment.yaml
E0812 18:30:43.205263   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
E0812 18:30:43.211486   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
E0812 18:30:43.222428   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
E0812 18:30:43.246338   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
E0812 18:30:43.294657   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
E0812 18:30:43.374829   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
E0812 18:30:43.543532   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
E0812 18:30:43.871956   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
net_test.go:131: (dbg) Done: kubectl --context enable-default-cni-20210812175913-27878 replace --force -f testdata/netcat-deployment.yaml: (2.656200215s)
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-hc69f" [a57d598d-af6e-46ca-86a3-cd4e5489b84e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0812 18:30:44.520400   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
E0812 18:30:45.800952   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
E0812 18:30:48.370410   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
helpers_test.go:343: "netcat-66fbc655d5-hc69f" [a57d598d-af6e-46ca-86a3-cd4e5489b84e] Running
E0812 18:30:53.494894   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
E0812 18:30:54.489669   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/false-20210812175913-27878/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.011218169s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.70s)

TestNetworkPlugins/group/bridge/Start (88.7s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
E0812 18:36:11.027219   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210812175913-27878/client.crt: no such file or directory
E0812 18:36:13.883158   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210812175913-27878/client.crt: no such file or directory
E0812 18:37:13.879254   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812170347-27878/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20210812175913-27878 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (1m28.696233006s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.70s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.72s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20210812175913-27878 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.72s)

TestNetworkPlugins/group/bridge/NetCatPod (17.59s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210812175913-27878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:131: (dbg) Done: kubectl --context bridge-20210812175913-27878 replace --force -f testdata/netcat-deployment.yaml: (2.563032044s)
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-477rc" [2cdce081-cb36-42e4-9f2a-a77d21bcc863] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0812 18:37:43.981602   27878 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--12230-27126-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210812175913-27878/client.crt: no such file or directory
helpers_test.go:343: "netcat-66fbc655d5-477rc" [2cdce081-cb36-42e4-9f2a-a77d21bcc863] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.007193854s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (17.59s)

Test skip (13/247)

TestDownloadOnly/v1.14.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.21.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

TestDownloadOnly/v1.21.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.0-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:42: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmd (13s)
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20210812170347-27878 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210812170347-27878 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6cbfcd7cbc-rxfjf" [0b1fb111-9582-4351-99cb-76271cc1364b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6cbfcd7cbc-rxfjf" [0b1fb111-9582-4351-99cb-76271cc1364b] Running
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 12.026640187s
functional_test.go:1372: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20210812170347-27878 service list
functional_test.go:1381: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmd (13.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.67s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210812181124-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20210812181124-27878
--- SKIP: TestStartStop/group/disable-driver-mounts (0.67s)

TestNetworkPlugins/group/flannel (0.71s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210812175913-27878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20210812175913-27878
--- SKIP: TestNetworkPlugins/group/flannel (0.71s)