Test Report: Docker_macOS 13639

                    
60328d4d40a11ac7c18c6243f597bcfbb3050148:2022-05-11:23896

Failed tests (6/280)

Order  Failed test                                        Duration (s)
4      TestDownloadOnly/v1.16.0/preload-exists            0.11
273    TestNetworkPlugins/group/calico/Start              552.28
286    TestNetworkPlugins/group/enable-default-cni/DNS    353.31
287    TestNetworkPlugins/group/kindnet/Start             330.68
291    TestNetworkPlugins/group/bridge/DNS                335.72
292    TestNetworkPlugins/group/kubenet/Start             342.16
TestDownloadOnly/v1.16.0/preload-exists (0.11s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:107: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.11s)
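
The failure at aaa_download_only_test.go:107 is a plain existence check on the cached preload tarball: a stat of the expected path under the workspace's .minikube/cache/preloaded-tarball directory returned "no such file or directory", i.e. the v1.16.0 preload was never downloaded into this cache. A minimal sketch of that kind of check in Go follows; the helper name and path construction are illustrative, not the suite's actual code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadTarballPath builds the expected cache location of the preloaded-images
// tarball for a given Kubernetes version. The layout mirrors the path in the
// failure above (MINIKUBE_HOME already points at the .minikube directory in
// this job); treat it as illustrative rather than canonical.
func preloadTarballPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadTarballPath(os.Getenv("MINIKUBE_HOME"), "v1.16.0")
	// os.Stat returns a *PathError wrapping "no such file or directory" when
	// the tarball is missing, which is exactly the error reported above.
	if _, err := os.Stat(p); err != nil {
		fmt.Printf("failed to verify preloaded tarball file exists: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("preload tarball present:", p)
}
```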

TestNetworkPlugins/group/calico/Start (552.28s)
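
As the log below shows, net_test.go:101 shells out to the freshly built out/minikube-darwin-amd64 binary with --cni=calico --driver=docker and fails the test when the command exits non-zero (here exit status 80 after roughly nine minutes; --wait-timeout=5m only bounds the component wait). A rough stand-in for that Run helper, stripped of the suite's logging, profile naming, and cleanup, might look like the sketch below; the profile name and outer timeout are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Outer budget for the whole start; the --wait-timeout flag passed to
	// minikube only bounds how long it waits for cluster components.
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()

	// Profile name is illustrative; the real test derives it from the test
	// name plus a timestamp (e.g. calico-20220511164516-84527).
	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
		"start", "-p", "calico-example", "--memory=2048",
		"--alsologtostderr", "--wait=true", "--wait-timeout=5m",
		"--cni=calico", "--driver=docker")

	out, err := cmd.CombinedOutput()
	if err != nil {
		// A non-zero exit surfaces here as an *exec.ExitError, matching the
		// "Non-zero exit ... exit status 80" line in the log below.
		fmt.Printf("minikube start failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println("calico cluster started")
}
```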

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220511164516-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
E0511 17:05:10.575459   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p calico-20220511164516-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : exit status 80 (9m12.262235754s)

-- stdout --
	* [calico-20220511164516-84527] minikube v1.25.2 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13639
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node calico-20220511164516-84527 in cluster calico-20220511164516-84527
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0511 17:04:05.167280    2053 out.go:296] Setting OutFile to fd 1 ...
	I0511 17:04:05.167485    2053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 17:04:05.167490    2053 out.go:309] Setting ErrFile to fd 2...
	I0511 17:04:05.167494    2053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 17:04:05.167596    2053 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	I0511 17:04:05.167946    2053 out.go:303] Setting JSON to false
	I0511 17:04:05.184848    2053 start.go:115] hostinfo: {"hostname":"37310.local","uptime":29020,"bootTime":1652284825,"procs":370,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0511 17:04:05.184948    2053 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0511 17:04:05.211661    2053 out.go:177] * [calico-20220511164516-84527] minikube v1.25.2 on Darwin 11.2.3
	I0511 17:04:05.258706    2053 notify.go:193] Checking for updates...
	I0511 17:04:05.284557    2053 out.go:177]   - MINIKUBE_LOCATION=13639
	I0511 17:04:05.331400    2053 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 17:04:05.378485    2053 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0511 17:04:05.425917    2053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 17:04:05.472297    2053 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	I0511 17:04:05.499182    2053 config.go:178] Loaded profile config "cilium-20220511164516-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 17:04:05.499280    2053 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 17:04:05.595958    2053 docker.go:137] docker version: linux-20.10.6
	I0511 17:04:05.596096    2053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 17:04:05.775079    2053 info.go:265] docker info: {ID:RQDQ:HCOB:T3HU:YQ6G:4CPW:M2H3:E64P:XHRS:32BB:YAUK:A452:DSC2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-05-12 00:04:05.705832242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0511 17:04:05.800994    2053 out.go:177] * Using the docker driver based on user configuration
	I0511 17:04:05.826693    2053 start.go:284] selected driver: docker
	I0511 17:04:05.826730    2053 start.go:801] validating driver "docker" against <nil>
	I0511 17:04:05.826762    2053 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0511 17:04:05.829800    2053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 17:04:06.011067    2053 info.go:265] docker info: {ID:RQDQ:HCOB:T3HU:YQ6G:4CPW:M2H3:E64P:XHRS:32BB:YAUK:A452:DSC2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-05-12 00:04:05.941386676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0511 17:04:06.011191    2053 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0511 17:04:06.011360    2053 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0511 17:04:06.037968    2053 out.go:177] * Using Docker Desktop driver with the root privilege
	I0511 17:04:06.063719    2053 cni.go:95] Creating CNI manager for "calico"
	I0511 17:04:06.063747    2053 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0511 17:04:06.063798    2053 start_flags.go:306] config:
	{Name:calico-20220511164516-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220511164516-84527 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 17:04:06.089626    2053 out.go:177] * Starting control plane node calico-20220511164516-84527 in cluster calico-20220511164516-84527
	I0511 17:04:06.136792    2053 cache.go:120] Beginning downloading kic base image for docker with docker
	I0511 17:04:06.162471    2053 out.go:177] * Pulling base image ...
	I0511 17:04:06.209625    2053 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 17:04:06.209637    2053 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0511 17:04:06.209672    2053 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0511 17:04:06.209680    2053 cache.go:57] Caching tarball of preloaded images
	I0511 17:04:06.209797    2053 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0511 17:04:06.209807    2053 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0511 17:04:06.210392    2053 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/config.json ...
	I0511 17:04:06.210476    2053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/config.json: {Name:mkc47d19a45fcc0cb671f0c8a2ab12aecd3f7fff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:04:06.328041    2053 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0511 17:04:06.328062    2053 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0511 17:04:06.328074    2053 cache.go:206] Successfully downloaded all kic artifacts
	I0511 17:04:06.328124    2053 start.go:352] acquiring machines lock for calico-20220511164516-84527: {Name:mk42151ec143b5dd01fcc2b0327f0a28056599b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0511 17:04:06.329189    2053 start.go:356] acquired machines lock for "calico-20220511164516-84527" in 1.051648ms
	I0511 17:04:06.329223    2053 start.go:91] Provisioning new machine with config: &{Name:calico-20220511164516-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220511164516-84527 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0511 17:04:06.329298    2053 start.go:131] createHost starting for "" (driver="docker")
	I0511 17:04:06.376538    2053 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0511 17:04:06.376940    2053 start.go:165] libmachine.API.Create for "calico-20220511164516-84527" (driver="docker")
	I0511 17:04:06.376982    2053 client.go:168] LocalClient.Create starting
	I0511 17:04:06.377094    2053 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem
	I0511 17:04:06.377136    2053 main.go:134] libmachine: Decoding PEM data...
	I0511 17:04:06.377158    2053 main.go:134] libmachine: Parsing certificate...
	I0511 17:04:06.377257    2053 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem
	I0511 17:04:06.377292    2053 main.go:134] libmachine: Decoding PEM data...
	I0511 17:04:06.377304    2053 main.go:134] libmachine: Parsing certificate...
	I0511 17:04:06.377807    2053 cli_runner.go:164] Run: docker network inspect calico-20220511164516-84527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0511 17:04:06.492773    2053 cli_runner.go:211] docker network inspect calico-20220511164516-84527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0511 17:04:06.492872    2053 network_create.go:272] running [docker network inspect calico-20220511164516-84527] to gather additional debugging logs...
	I0511 17:04:06.492890    2053 cli_runner.go:164] Run: docker network inspect calico-20220511164516-84527
	W0511 17:04:06.604752    2053 cli_runner.go:211] docker network inspect calico-20220511164516-84527 returned with exit code 1
	I0511 17:04:06.604778    2053 network_create.go:275] error running [docker network inspect calico-20220511164516-84527]: docker network inspect calico-20220511164516-84527: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220511164516-84527
	I0511 17:04:06.604798    2053 network_create.go:277] output of [docker network inspect calico-20220511164516-84527]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220511164516-84527
	
	** /stderr **
	I0511 17:04:06.604884    2053 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0511 17:04:06.720297    2053 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000136dc8] misses:0}
	I0511 17:04:06.720354    2053 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0511 17:04:06.720377    2053 network_create.go:115] attempt to create docker network calico-20220511164516-84527 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0511 17:04:06.720459    2053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220511164516-84527
	W0511 17:04:06.833825    2053 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220511164516-84527 returned with exit code 1
	W0511 17:04:06.833865    2053 network_create.go:107] failed to create docker network calico-20220511164516-84527 192.168.49.0/24, will retry: subnet is taken
	I0511 17:04:06.834084    2053 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000136dc8] amended:false}} dirty:map[] misses:0}
	I0511 17:04:06.834104    2053 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0511 17:04:06.834298    2053 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000136dc8] amended:true}} dirty:map[192.168.49.0:0xc000136dc8 192.168.58.0:0xc00074c7b8] misses:0}
	I0511 17:04:06.834314    2053 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0511 17:04:06.834320    2053 network_create.go:115] attempt to create docker network calico-20220511164516-84527 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0511 17:04:06.834399    2053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220511164516-84527
	I0511 17:04:07.857363    2053 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220511164516-84527: (1.023384618s)
	I0511 17:04:07.857386    2053 network_create.go:99] docker network calico-20220511164516-84527 192.168.58.0/24 created
	I0511 17:04:07.857401    2053 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20220511164516-84527" container
	I0511 17:04:07.857518    2053 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0511 17:04:07.988989    2053 cli_runner.go:164] Run: docker volume create calico-20220511164516-84527 --label name.minikube.sigs.k8s.io=calico-20220511164516-84527 --label created_by.minikube.sigs.k8s.io=true
	I0511 17:04:08.102532    2053 oci.go:103] Successfully created a docker volume calico-20220511164516-84527
	I0511 17:04:08.102647    2053 cli_runner.go:164] Run: docker run --rm --name calico-20220511164516-84527-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220511164516-84527 --entrypoint /usr/bin/test -v calico-20220511164516-84527:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0511 17:04:08.633158    2053 oci.go:107] Successfully prepared a docker volume calico-20220511164516-84527
	I0511 17:04:08.633193    2053 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 17:04:08.633207    2053 kic.go:179] Starting extracting preloaded images to volume ...
	I0511 17:04:08.633308    2053 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220511164516-84527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0511 17:04:13.175715    2053 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220511164516-84527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (4.543963053s)
	I0511 17:04:13.175739    2053 kic.go:188] duration metric: took 4.544144 seconds to extract preloaded images to volume
	I0511 17:04:13.175849    2053 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0511 17:04:13.373168    2053 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220511164516-84527 --name calico-20220511164516-84527 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220511164516-84527 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220511164516-84527 --network calico-20220511164516-84527 --ip 192.168.58.2 --volume calico-20220511164516-84527:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0511 17:04:19.728515    2053 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220511164516-84527 --name calico-20220511164516-84527 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220511164516-84527 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220511164516-84527 --network calico-20220511164516-84527 --ip 192.168.58.2 --volume calico-20220511164516-84527:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (6.356796878s)
	I0511 17:04:19.728649    2053 cli_runner.go:164] Run: docker container inspect calico-20220511164516-84527 --format={{.State.Running}}
	I0511 17:04:19.864319    2053 cli_runner.go:164] Run: docker container inspect calico-20220511164516-84527 --format={{.State.Status}}
	I0511 17:04:19.996367    2053 cli_runner.go:164] Run: docker exec calico-20220511164516-84527 stat /var/lib/dpkg/alternatives/iptables
	I0511 17:04:20.204585    2053 oci.go:247] the created container "calico-20220511164516-84527" has a running status.
	I0511 17:04:20.204617    2053 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511164516-84527/id_rsa...
	I0511 17:04:20.627647    2053 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511164516-84527/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0511 17:04:20.816504    2053 cli_runner.go:164] Run: docker container inspect calico-20220511164516-84527 --format={{.State.Status}}
	I0511 17:04:20.948528    2053 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0511 17:04:20.948550    2053 kic_runner.go:114] Args: [docker exec --privileged calico-20220511164516-84527 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0511 17:04:21.155934    2053 cli_runner.go:164] Run: docker container inspect calico-20220511164516-84527 --format={{.State.Status}}
	I0511 17:04:21.283427    2053 machine.go:88] provisioning docker machine ...
	I0511 17:04:21.283489    2053 ubuntu.go:169] provisioning hostname "calico-20220511164516-84527"
	I0511 17:04:21.283606    2053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511164516-84527
	I0511 17:04:21.408812    2053 main.go:134] libmachine: Using SSH client type: native
	I0511 17:04:21.409001    2053 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 65340 <nil> <nil>}
	I0511 17:04:21.409018    2053 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220511164516-84527 && echo "calico-20220511164516-84527" | sudo tee /etc/hostname
	I0511 17:04:21.410405    2053 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0511 17:04:24.541886    2053 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220511164516-84527
	
	I0511 17:04:24.541998    2053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511164516-84527
	I0511 17:04:24.669649    2053 main.go:134] libmachine: Using SSH client type: native
	I0511 17:04:24.669850    2053 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 65340 <nil> <nil>}
	I0511 17:04:24.669876    2053 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220511164516-84527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220511164516-84527/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220511164516-84527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0511 17:04:24.784742    2053 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0511 17:04:24.784774    2053 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube}
	I0511 17:04:24.784814    2053 ubuntu.go:177] setting up certificates
	I0511 17:04:24.784826    2053 provision.go:83] configureAuth start
	I0511 17:04:24.784930    2053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220511164516-84527
	I0511 17:04:24.915546    2053 provision.go:138] copyHostCerts
	I0511 17:04:24.915653    2053 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem, removing ...
	I0511 17:04:24.915665    2053 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem
	I0511 17:04:24.915764    2053 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem (1082 bytes)
	I0511 17:04:24.915960    2053 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem, removing ...
	I0511 17:04:24.915973    2053 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem
	I0511 17:04:24.916035    2053 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem (1123 bytes)
	I0511 17:04:24.916190    2053 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem, removing ...
	I0511 17:04:24.916196    2053 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem
	I0511 17:04:24.916251    2053 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem (1679 bytes)
	I0511 17:04:24.916376    2053 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem org=jenkins.calico-20220511164516-84527 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220511164516-84527]
	I0511 17:04:25.311266    2053 provision.go:172] copyRemoteCerts
	I0511 17:04:25.311330    2053 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0511 17:04:25.311385    2053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511164516-84527
	I0511 17:04:25.433964    2053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65340 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511164516-84527/id_rsa Username:docker}
	I0511 17:04:25.521880    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0511 17:04:25.540171    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0511 17:04:25.561646    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0511 17:04:25.587498    2053 provision.go:86] duration metric: configureAuth took 802.759315ms
	I0511 17:04:25.587525    2053 ubuntu.go:193] setting minikube options for container-runtime
	I0511 17:04:25.587806    2053 config.go:178] Loaded profile config "calico-20220511164516-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 17:04:25.587959    2053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511164516-84527
	I0511 17:04:25.714833    2053 main.go:134] libmachine: Using SSH client type: native
	I0511 17:04:25.714989    2053 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 65340 <nil> <nil>}
	I0511 17:04:25.715003    2053 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0511 17:04:25.831547    2053 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0511 17:04:25.831562    2053 ubuntu.go:71] root file system type: overlay
	I0511 17:04:25.831724    2053 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0511 17:04:25.831823    2053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511164516-84527
	I0511 17:04:25.957662    2053 main.go:134] libmachine: Using SSH client type: native
	I0511 17:04:25.957818    2053 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 65340 <nil> <nil>}
	I0511 17:04:25.957877    2053 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0511 17:04:26.088278    2053 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0511 17:04:26.088411    2053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511164516-84527
	I0511 17:04:26.209720    2053 main.go:134] libmachine: Using SSH client type: native
	I0511 17:04:26.209887    2053 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 65340 <nil> <nil>}
	I0511 17:04:26.209900    2053 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0511 17:04:45.343552    2053 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 00:04:26.108946090 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0511 17:04:45.343575    2053 machine.go:91] provisioned docker machine in 24.062139566s
	I0511 17:04:45.343584    2053 client.go:171] LocalClient.Create took 38.973124969s
	I0511 17:04:45.343601    2053 start.go:173] duration metric: libmachine.API.Create for "calico-20220511164516-84527" took 38.973189911s
	I0511 17:04:45.343611    2053 start.go:306] post-start starting for "calico-20220511164516-84527" (driver="docker")
	I0511 17:04:45.343614    2053 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0511 17:04:45.343701    2053 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0511 17:04:45.343775    2053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511164516-84527
	I0511 17:04:45.470569    2053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65340 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511164516-84527/id_rsa Username:docker}
	I0511 17:04:45.557986    2053 ssh_runner.go:195] Run: cat /etc/os-release
	I0511 17:04:45.561982    2053 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0511 17:04:45.561996    2053 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0511 17:04:45.562009    2053 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0511 17:04:45.562013    2053 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0511 17:04:45.562027    2053 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/addons for local assets ...
	I0511 17:04:45.562125    2053 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files for local assets ...
	I0511 17:04:45.562276    2053 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/845272.pem -> 845272.pem in /etc/ssl/certs
	I0511 17:04:45.562442    2053 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0511 17:04:45.570263    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/845272.pem --> /etc/ssl/certs/845272.pem (1708 bytes)
	I0511 17:04:45.590167    2053 start.go:309] post-start completed in 246.553772ms
	I0511 17:04:45.590834    2053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220511164516-84527
	I0511 17:04:45.713750    2053 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/config.json ...
	I0511 17:04:45.714256    2053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0511 17:04:45.714352    2053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511164516-84527
	I0511 17:04:45.844185    2053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65340 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511164516-84527/id_rsa Username:docker}
	I0511 17:04:45.923787    2053 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0511 17:04:45.928733    2053 start.go:134] duration metric: createHost completed in 39.605993294s
	I0511 17:04:45.928751    2053 start.go:81] releasing machines lock for "calico-20220511164516-84527", held for 39.606119175s
	I0511 17:04:45.928863    2053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220511164516-84527
	I0511 17:04:46.046968    2053 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0511 17:04:46.046976    2053 ssh_runner.go:195] Run: systemctl --version
	I0511 17:04:46.047053    2053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511164516-84527
	I0511 17:04:46.047051    2053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511164516-84527
	I0511 17:04:46.189642    2053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65340 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511164516-84527/id_rsa Username:docker}
	I0511 17:04:46.189775    2053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65340 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511164516-84527/id_rsa Username:docker}
	I0511 17:04:46.275862    2053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0511 17:04:46.414841    2053 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0511 17:04:46.429286    2053 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0511 17:04:46.429381    2053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0511 17:04:46.442839    2053 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0511 17:04:46.459335    2053 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0511 17:04:46.527087    2053 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0511 17:04:46.599322    2053 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0511 17:04:46.612400    2053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0511 17:04:46.677421    2053 ssh_runner.go:195] Run: sudo systemctl start docker
	I0511 17:04:46.688278    2053 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0511 17:04:46.733306    2053 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0511 17:04:46.841108    2053 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0511 17:04:46.841210    2053 cli_runner.go:164] Run: docker exec -t calico-20220511164516-84527 dig +short host.docker.internal
	I0511 17:04:47.140693    2053 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0511 17:04:47.140824    2053 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0511 17:04:47.151606    2053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
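The two lines above are the idempotent way the host record is pinned in the node's /etc/hosts: grep for an existing entry, strip any old mapping with grep -v, append the fresh one, and copy the temp file back over /etc/hosts. A minimal Go sketch of the same upsert pattern, for illustration only (it assumes write access to the target file):

package main

import (
	"os"
	"strings"
)

// upsertHostRecord rewrites an /etc/hosts-style file so exactly one line maps
// the given name, mirroring the grep -v + echo one-liner in the log above.
func upsertHostRecord(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for this name (the grep -v step).
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	// Append the fresh mapping (the echo step) and write the file back.
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values taken from the log above; writing /etc/hosts needs root.
	if err := upsertHostRecord("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		panic(err)
	}
}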
	I0511 17:04:47.173963    2053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220511164516-84527
	I0511 17:04:47.341385    2053 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 17:04:47.341492    2053 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0511 17:04:47.384646    2053 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0511 17:04:47.384665    2053 docker.go:541] Images already preloaded, skipping extraction
	I0511 17:04:47.384828    2053 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0511 17:04:47.425394    2053 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0511 17:04:47.425422    2053 cache_images.go:84] Images are preloaded, skipping loading
	I0511 17:04:47.425570    2053 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0511 17:04:47.581096    2053 cni.go:95] Creating CNI manager for "calico"
	I0511 17:04:47.581123    2053 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0511 17:04:47.581147    2053 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220511164516-84527 NodeName:calico-20220511164516-84527 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0511 17:04:47.581266    2053 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220511164516-84527"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0511 17:04:47.581355    2053 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220511164516-84527 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:calico-20220511164516-84527 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
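The rendered kubeadm config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---, and it is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small sketch, assuming gopkg.in/yaml.v3, that walks such a multi-document file and prints each document's kind; purely illustrative and not part of minikube:

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path from the scp line in the log; adjust as needed.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Each "---"-separated document decodes into a generic map.
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}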
	I0511 17:04:47.581465    2053 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0511 17:04:47.595434    2053 binaries.go:44] Found k8s binaries, skipping transfer
	I0511 17:04:47.595512    2053 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0511 17:04:47.615977    2053 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0511 17:04:47.635310    2053 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0511 17:04:47.662913    2053 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
	I0511 17:04:47.703699    2053 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0511 17:04:47.716065    2053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0511 17:04:47.740654    2053 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527 for IP: 192.168.58.2
	I0511 17:04:47.740820    2053 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key
	I0511 17:04:47.740880    2053 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key
	I0511 17:04:47.740935    2053 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/client.key
	I0511 17:04:47.740953    2053 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/client.crt with IP's: []
	I0511 17:04:47.860278    2053 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/client.crt ...
	I0511 17:04:47.860296    2053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/client.crt: {Name:mkdb61a07b9a66c5fd9f74e353aa37f760d8eec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:04:47.861489    2053 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/client.key ...
	I0511 17:04:47.861514    2053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/client.key: {Name:mkfda4f1674af806f5874011e2a826b14c4ee3ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:04:47.862526    2053 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/apiserver.key.cee25041
	I0511 17:04:47.862554    2053 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0511 17:04:47.983167    2053 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/apiserver.crt.cee25041 ...
	I0511 17:04:47.983183    2053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/apiserver.crt.cee25041: {Name:mkd231c9351d79592d15bd470935f72d96d89cc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:04:47.984175    2053 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/apiserver.key.cee25041 ...
	I0511 17:04:47.984190    2053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/apiserver.key.cee25041: {Name:mk15e33d6a53ede0b6829b0238cc2469c157def8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:04:47.984770    2053 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/apiserver.crt
	I0511 17:04:47.984958    2053 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/apiserver.key
	I0511 17:04:47.985145    2053 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/proxy-client.key
	I0511 17:04:47.985166    2053 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/proxy-client.crt with IP's: []
	I0511 17:04:48.305518    2053 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/proxy-client.crt ...
	I0511 17:04:48.305534    2053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/proxy-client.crt: {Name:mk311b4abc583b4ea3a40a59a8fd30b0759d5488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:04:48.306779    2053 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/proxy-client.key ...
	I0511 17:04:48.306794    2053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/proxy-client.key: {Name:mk5889254ea0c90f9ad1d8810f9cd38ff7253004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
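certs.go is issuing the per-profile certificates here: a client cert for kubectl, an apiserver serving cert with the IP SANs [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1], and an aggregator proxy-client cert, all signed by the cached minikube CA. Below is a condensed crypto/x509 sketch of issuing a CA-signed certificate with IP SANs; the key sizes, validity periods, output file name, and the freshly generated CA are assumptions for the example (minikube reuses .minikube/ca.key rather than creating a new CA).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA for the example (minikube would load .minikube/ca.key / ca.crt instead).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Serving cert with the IP SANs reported in the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		DNSNames: []string{"localhost"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}

	// PEM-encode the issued certificate (the apiserver.crt role in the log).
	out, err := os.Create("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}); err != nil {
		log.Fatal(err)
	}
}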
	I0511 17:04:48.307483    2053 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/84527.pem (1338 bytes)
	W0511 17:04:48.307538    2053 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/84527_empty.pem, impossibly tiny 0 bytes
	I0511 17:04:48.307557    2053 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem (1679 bytes)
	I0511 17:04:48.307609    2053 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem (1082 bytes)
	I0511 17:04:48.307647    2053 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem (1123 bytes)
	I0511 17:04:48.307688    2053 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem (1679 bytes)
	I0511 17:04:48.307766    2053 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/845272.pem (1708 bytes)
	I0511 17:04:48.308292    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0511 17:04:48.331975    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0511 17:04:48.367107    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0511 17:04:48.396114    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/calico-20220511164516-84527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0511 17:04:48.415171    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0511 17:04:48.458403    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0511 17:04:48.488921    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0511 17:04:48.509223    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0511 17:04:48.533419    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/84527.pem --> /usr/share/ca-certificates/84527.pem (1338 bytes)
	I0511 17:04:48.563880    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/845272.pem --> /usr/share/ca-certificates/845272.pem (1708 bytes)
	I0511 17:04:48.596309    2053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0511 17:04:48.619234    2053 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0511 17:04:48.645471    2053 ssh_runner.go:195] Run: openssl version
	I0511 17:04:48.657157    2053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84527.pem && ln -fs /usr/share/ca-certificates/84527.pem /etc/ssl/certs/84527.pem"
	I0511 17:04:48.669607    2053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84527.pem
	I0511 17:04:48.676860    2053 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 23:00 /usr/share/ca-certificates/84527.pem
	I0511 17:04:48.676941    2053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84527.pem
	I0511 17:04:48.686846    2053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84527.pem /etc/ssl/certs/51391683.0"
	I0511 17:04:48.703123    2053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845272.pem && ln -fs /usr/share/ca-certificates/845272.pem /etc/ssl/certs/845272.pem"
	I0511 17:04:48.716561    2053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845272.pem
	I0511 17:04:48.723401    2053 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:00 /usr/share/ca-certificates/845272.pem
	I0511 17:04:48.723477    2053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845272.pem
	I0511 17:04:48.733202    2053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/845272.pem /etc/ssl/certs/3ec20f2e.0"
	I0511 17:04:48.750661    2053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0511 17:04:48.767042    2053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0511 17:04:48.774999    2053 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I0511 17:04:48.775130    2053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0511 17:04:48.786316    2053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0511 17:04:48.800236    2053 kubeadm.go:391] StartCluster: {Name:calico-20220511164516-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220511164516-84527 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 17:04:48.800373    2053 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0511 17:04:48.853756    2053 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0511 17:04:48.870924    2053 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0511 17:04:48.883414    2053 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0511 17:04:48.883514    2053 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0511 17:04:48.893405    2053 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0511 17:04:48.893438    2053 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0511 17:04:49.613759    2053 out.go:204]   - Generating certificates and keys ...
	I0511 17:04:53.726918    2053 out.go:204]   - Booting up control plane ...
	I0511 17:05:02.761170    2053 out.go:204]   - Configuring RBAC rules ...
	I0511 17:05:03.146336    2053 cni.go:95] Creating CNI manager for "calico"
	I0511 17:05:03.189890    2053 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0511 17:05:03.275296    2053 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0511 17:05:03.275312    2053 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0511 17:05:03.314545    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0511 17:05:04.415053    2053 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.100484916s)
	I0511 17:05:04.415078    2053 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0511 17:05:04.415192    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:04.415204    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=calico-20220511164516-84527 minikube.k8s.io/updated_at=2022_05_11T17_05_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:04.511048    2053 ops.go:34] apiserver oom_adj: -16
	I0511 17:05:04.511147    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:05.081729    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:05.581761    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:06.076890    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:06.581729    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:07.081778    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:07.581792    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:08.081756    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:08.581932    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:09.081744    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:09.581766    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:10.081753    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:10.581811    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:11.081769    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:11.581784    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:12.081797    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:12.581851    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:13.081838    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:13.581739    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:14.081795    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:14.581776    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:15.081779    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:15.581770    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:16.081810    2053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:05:16.157692    2053 kubeadm.go:1020] duration metric: took 11.74254207s to wait for elevateKubeSystemPrivileges.
	I0511 17:05:16.157707    2053 kubeadm.go:393] StartCluster complete in 27.357580017s
	I0511 17:05:16.157723    2053 settings.go:142] acquiring lock: {Name:mk1c460769e7c664507a8af69f45c0543d2c3117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:05:16.157830    2053 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 17:05:16.158572    2053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig: {Name:mkf471860c8603bccffa01a67a121482d1a42c8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:05:16.707517    2053 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220511164516-84527" rescaled to 1
	I0511 17:05:16.707553    2053 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0511 17:05:16.707577    2053 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0511 17:05:16.750491    2053 out.go:177] * Verifying Kubernetes components...
	I0511 17:05:16.707609    2053 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0511 17:05:16.707777    2053 config.go:178] Loaded profile config "calico-20220511164516-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 17:05:16.814713    2053 addons.go:65] Setting storage-provisioner=true in profile "calico-20220511164516-84527"
	I0511 17:05:16.814722    2053 addons.go:65] Setting default-storageclass=true in profile "calico-20220511164516-84527"
	I0511 17:05:16.814730    2053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0511 17:05:16.814752    2053 addons.go:153] Setting addon storage-provisioner=true in "calico-20220511164516-84527"
	I0511 17:05:16.814760    2053 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220511164516-84527"
	W0511 17:05:16.814766    2053 addons.go:165] addon storage-provisioner should already be in state true
	I0511 17:05:16.814821    2053 host.go:66] Checking if "calico-20220511164516-84527" exists ...
	I0511 17:05:16.816014    2053 cli_runner.go:164] Run: docker container inspect calico-20220511164516-84527 --format={{.State.Status}}
	I0511 17:05:16.816036    2053 cli_runner.go:164] Run: docker container inspect calico-20220511164516-84527 --format={{.State.Status}}
	I0511 17:05:16.827178    2053 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0511 17:05:16.850707    2053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220511164516-84527
	I0511 17:05:17.046116    2053 addons.go:153] Setting addon default-storageclass=true in "calico-20220511164516-84527"
	W0511 17:05:17.080505    2053 addons.go:165] addon default-storageclass should already be in state true
	I0511 17:05:17.080496    2053 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0511 17:05:17.080541    2053 host.go:66] Checking if "calico-20220511164516-84527" exists ...
	I0511 17:05:17.111479    2053 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0511 17:05:17.111496    2053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0511 17:05:17.111662    2053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511164516-84527
	I0511 17:05:17.112650    2053 cli_runner.go:164] Run: docker container inspect calico-20220511164516-84527 --format={{.State.Status}}
	I0511 17:05:17.117145    2053 node_ready.go:35] waiting up to 5m0s for node "calico-20220511164516-84527" to be "Ready" ...
	I0511 17:05:17.139115    2053 node_ready.go:49] node "calico-20220511164516-84527" has status "Ready":"True"
	I0511 17:05:17.139130    2053 node_ready.go:38] duration metric: took 21.956363ms waiting for node "calico-20220511164516-84527" to be "Ready" ...
	I0511 17:05:17.139139    2053 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0511 17:05:17.150466    2053 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace to be "Ready" ...
	I0511 17:05:17.276372    2053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65340 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511164516-84527/id_rsa Username:docker}
	I0511 17:05:17.276421    2053 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0511 17:05:17.276430    2053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0511 17:05:17.276505    2053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220511164516-84527
	I0511 17:05:17.418260    2053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65340 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/calico-20220511164516-84527/id_rsa Username:docker}
	I0511 17:05:17.425531    2053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0511 17:05:17.615498    2053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0511 17:05:18.358626    2053 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.531408923s)
	I0511 17:05:18.358650    2053 start.go:815] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
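The kubectl/sed pipeline above injects a hosts block for host.minikube.internal into the coredns ConfigMap, just ahead of the forward plugin, so in-cluster lookups of the host resolve to 192.168.65.2. A minimal client-go sketch of the same ConfigMap edit, for illustration only: the kubeconfig path and inserted indentation are assumptions, Update here plays the role of kubectl replace in the log, and CoreDNS still has to reload the Corefile on its own.

package main

import (
	"context"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Any admin kubeconfig for the cluster works; this path matches the one used in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	corefile := cm.Data["Corefile"]
	if strings.Contains(corefile, "host.minikube.internal") {
		return // already injected; keep the edit idempotent
	}

	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			// Insert the hosts block right before the forward plugin, as the sed in the log does.
			out = append(out,
				"    hosts {",
				"       192.168.65.2 host.minikube.internal",
				"       fallthrough",
				"    }")
		}
		out = append(out, line)
	}
	cm.Data["Corefile"] = strings.Join(out, "\n")

	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}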
	I0511 17:05:18.446479    2053 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0511 17:05:18.509931    2053 addons.go:417] enableAddons completed in 1.802318845s
	I0511 17:05:19.170758    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-11 17:05:16 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0511 17:05:21.171365    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:23.171415    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:25.173870    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:27.668952    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:30.167206    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:32.671434    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:35.170807    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:37.667592    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:39.667662    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:41.668761    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:44.169448    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:46.677268    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:49.178310    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:51.179180    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:53.677800    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:56.174288    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:05:58.176547    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:00.667385    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:02.675349    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:04.677258    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:06.678499    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:09.177208    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:11.675536    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:13.676801    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:16.166724    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:18.167893    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:20.174626    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:22.667638    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:24.669563    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:26.673072    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:28.675494    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:31.170070    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:33.666803    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:35.668515    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:37.669925    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:40.178929    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:42.668234    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:45.172862    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:47.668963    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:50.167408    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:52.668501    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:55.167556    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:57.668468    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:06:59.671367    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:01.673648    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:04.170732    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:06.171949    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:08.667886    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:11.167029    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:13.171361    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:15.172238    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:17.668103    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:19.677650    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:22.172619    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:24.668733    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:26.670141    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:28.677725    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:31.169717    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:33.177052    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:35.678164    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:37.678321    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:39.679241    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:41.753833    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:44.174981    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:46.666877    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:48.668547    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:50.668743    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:52.670356    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:55.183762    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:57.668659    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:07:59.678068    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:02.168937    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:04.171404    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:06.172221    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:08.676040    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:11.169968    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:13.170082    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:15.171244    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:17.670059    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:20.168477    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:22.668632    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:24.672129    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:26.673655    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:29.168629    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:31.669150    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:33.669587    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:35.676686    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:37.677259    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:39.679042    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:42.173058    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:44.669608    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:46.679274    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:49.177776    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:51.673891    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:54.170641    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:56.669280    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:08:59.173034    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:01.669901    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:04.169855    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:06.671391    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:09.170915    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:11.668918    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:13.674213    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:16.168279    2053 pod_ready.go:102] pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:17.176030    2053 pod_ready.go:81] duration metric: took 4m0.022930595s waiting for pod "calico-kube-controllers-8594699699-xkbbm" in "kube-system" namespace to be "Ready" ...
	E0511 17:09:17.176042    2053 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
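This is where the run effectively fails: pod_ready.go polls each system-critical pod until its Ready condition turns True, and calico-kube-controllers never did within its wait budget, so the wait above ends with "timed out waiting for the condition". A minimal client-go sketch of such a readiness poll follows, for illustration only; the 2s interval, 4m timeout, and kubeconfig path are assumptions rather than minikube's exact values.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	name := "calico-kube-controllers-8594699699-xkbbm" // pod name from the log
	ctx := context.Background()

	// Poll every 2s for up to 4m, roughly matching the cadence seen in the log.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat lookup errors as transient and keep polling
		}
		return isPodReady(pod), nil
	})
	if err != nil {
		log.Fatalf("pod %q never became Ready: %v", name, err)
	}
	fmt.Printf("pod %q is Ready\n", name)
}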
	I0511 17:09:17.176049    2053 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-k7qdx" in "kube-system" namespace to be "Ready" ...
	I0511 17:09:19.195203    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:21.687979    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:23.694382    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:26.191514    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:28.691691    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:30.696579    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:33.191295    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:35.690503    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:37.691356    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:40.195059    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:42.691382    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:45.190923    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:47.695538    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:50.196499    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:52.689585    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:55.193829    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:09:57.691064    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:00.196746    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:02.689417    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:04.689508    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:06.689957    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:09.191165    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:11.192845    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:13.193334    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:15.690854    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:18.191259    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:20.200309    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:22.690291    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:24.691244    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:26.695321    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:29.190184    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:31.192973    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:33.196986    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:35.690774    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:38.193060    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:40.691458    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:43.191810    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:45.194321    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:47.691199    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:50.196580    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:52.697736    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:55.198756    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:10:57.689329    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:00.195329    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:02.195854    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:04.198925    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:06.689752    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:08.698010    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:11.196649    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:13.689202    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:15.692010    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:17.693887    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:20.196251    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:22.689900    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:24.695111    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:27.196248    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:29.692558    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:32.189451    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:34.191010    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:36.697927    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:39.196949    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:41.199241    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:43.695297    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:45.696160    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:48.192686    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:50.691370    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:52.696630    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:55.198778    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:57.692672    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:11:59.694151    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:02.197621    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:04.691872    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:06.692748    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:09.193420    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:11.196743    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:13.692205    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:15.693178    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:18.191971    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:20.194701    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:22.691023    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:25.193817    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:27.197946    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:29.691873    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:32.191466    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:34.691493    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:36.695836    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:39.196201    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:41.697964    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:44.193155    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:46.691879    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:48.698079    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:51.193947    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:53.695822    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:56.198227    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:12:58.699603    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:13:01.192636    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:13:03.192879    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:13:05.201231    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:13:07.691563    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:13:09.692674    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:13:11.693165    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:13:14.198157    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:13:16.199354    2053 pod_ready.go:102] pod "calico-node-k7qdx" in "kube-system" namespace has status "Ready":"False"
	I0511 17:13:17.198204    2053 pod_ready.go:81] duration metric: took 4m0.019457915s waiting for pod "calico-node-k7qdx" in "kube-system" namespace to be "Ready" ...
	E0511 17:13:17.198215    2053 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0511 17:13:17.198230    2053 pod_ready.go:38] duration metric: took 8m0.053778731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0511 17:13:17.224994    2053 out.go:177] 
	W0511 17:13:17.250679    2053 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0511 17:13:17.250693    2053 out.go:239] * 
	* 
	W0511 17:13:17.251218    2053 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0511 17:13:17.339609    2053 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (552.28s)
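
Note: both calico-kube-controllers-8594699699-xkbbm and calico-node-k7qdx never reported Ready before the extra wait expired, which is what produces the GUEST_START exit above. A minimal manual check, assuming the kubeconfig context matches the minikube profile name (the pod and namespace names are taken verbatim from the log):

    kubectl --context calico-20220511164516-84527 -n kube-system get pods -o wide
    kubectl --context calico-20220511164516-84527 -n kube-system describe pod calico-node-k7qdx
    kubectl --context calico-20220511164516-84527 -n kube-system logs calico-node-k7qdx --all-containers --tail=50

The describe output would show which init container or readiness probe is failing; `minikube logs --file=logs.txt` (as suggested in the error box) collects the same detail for a GitHub issue.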

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (353.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:08:46.650710   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:08:47.479029   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 17:08:51.333351   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:08:51.771049   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127919653s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:09:02.011722   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.109655493s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:09:22.493757   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:09:32.294704   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.139048671s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.11459186s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:10:03.456137   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137245897s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.111043909s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:10:42.412991   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:10:42.418160   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:10:42.428345   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:10:42.448874   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:10:42.496311   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:10:42.576981   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:10:42.741380   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:10:43.062728   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:10:43.708659   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:10:44.994799   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:10:47.556499   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:10:52.680542   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.121147464s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0511 17:10:54.216092   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:11:02.920857   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125545554s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0511 17:11:23.401829   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:11:25.378895   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123610407s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0511 17:12:01.159685   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 17:12:04.365385   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:12:16.096856   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:12:16.102507   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:12:16.112696   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:12:16.141074   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:12:16.181336   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:12:16.261990   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:12:16.426707   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:12:16.746834   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:12:17.393596   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:12:18.673740   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:12:21.237421   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.118832708s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0511 17:12:26.362835   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:12:35.268218   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 17:12:36.603141   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:12:57.089977   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137130174s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0511 17:13:10.312488   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142818922s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (353.31s)
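
Note: every nslookup attempt reports ";; connection timed out; no servers could be reached", i.e. the netcat pod cannot reach any DNS server at all, while the test expects kubernetes.default to resolve to 10.96.0.1. A minimal sketch for narrowing this down, assuming the kubeconfig context matches the profile name and that kube-dns keeps its conventional ClusterIP (10.96.0.10) inside minikube's default 10.96.0.0/12 service range:

    kubectl --context enable-default-cni-20220511164515-84527 -n kube-system get pods -l k8s-app=kube-dns
    kubectl --context enable-default-cni-20220511164515-84527 -n kube-system get svc kube-dns
    kubectl --context enable-default-cni-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default 10.96.0.10

If the last command resolves while the plain `nslookup kubernetes.default` keeps timing out, CoreDNS itself is healthy and the break is in the pod-to-service path of this profile's default CNI setup rather than in DNS.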

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (330.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220511164516-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
E0511 17:13:41.496002   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:13:47.473337   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 17:14:09.221748   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kindnet-20220511164516-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : exit status 80 (5m30.665263715s)

                                                
                                                
-- stdout --
	* [kindnet-20220511164516-84527] minikube v1.25.2 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13639
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kindnet-20220511164516-84527 in cluster kindnet-20220511164516-84527
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0511 17:13:40.639328    3143 out.go:296] Setting OutFile to fd 1 ...
	I0511 17:13:40.639480    3143 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 17:13:40.639486    3143 out.go:309] Setting ErrFile to fd 2...
	I0511 17:13:40.639490    3143 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 17:13:40.639586    3143 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	I0511 17:13:40.639922    3143 out.go:303] Setting JSON to false
	I0511 17:13:40.656428    3143 start.go:115] hostinfo: {"hostname":"37310.local","uptime":29595,"bootTime":1652284825,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0511 17:13:40.656516    3143 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0511 17:13:40.682371    3143 out.go:177] * [kindnet-20220511164516-84527] minikube v1.25.2 on Darwin 11.2.3
	I0511 17:13:40.730700    3143 notify.go:193] Checking for updates...
	I0511 17:13:40.756125    3143 out.go:177]   - MINIKUBE_LOCATION=13639
	I0511 17:13:40.782446    3143 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 17:13:40.808487    3143 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0511 17:13:40.834183    3143 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 17:13:40.860653    3143 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	I0511 17:13:40.887360    3143 config.go:178] Loaded profile config "enable-default-cni-20220511164515-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 17:13:40.887472    3143 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 17:13:40.989271    3143 docker.go:137] docker version: linux-20.10.6
	I0511 17:13:40.989439    3143 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 17:13:41.181387    3143 info.go:265] docker info: {ID:RQDQ:HCOB:T3HU:YQ6G:4CPW:M2H3:E64P:XHRS:32BB:YAUK:A452:DSC2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-05-12 00:13:41.11191368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0511 17:13:41.230169    3143 out.go:177] * Using the docker driver based on user configuration
	I0511 17:13:41.256055    3143 start.go:284] selected driver: docker
	I0511 17:13:41.256087    3143 start.go:801] validating driver "docker" against <nil>
	I0511 17:13:41.256118    3143 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0511 17:13:41.260024    3143 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 17:13:41.452910    3143 info.go:265] docker info: {ID:RQDQ:HCOB:T3HU:YQ6G:4CPW:M2H3:E64P:XHRS:32BB:YAUK:A452:DSC2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-05-12 00:13:41.38431688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0511 17:13:41.453610    3143 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0511 17:13:41.453948    3143 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0511 17:13:41.480932    3143 out.go:177] * Using Docker Desktop driver with the root privilege
	I0511 17:13:41.506684    3143 cni.go:95] Creating CNI manager for "kindnet"
	I0511 17:13:41.506732    3143 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0511 17:13:41.506746    3143 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0511 17:13:41.506758    3143 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0511 17:13:41.506781    3143 start_flags.go:306] config:
	{Name:kindnet-20220511164516-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220511164516-84527 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 17:13:41.533919    3143 out.go:177] * Starting control plane node kindnet-20220511164516-84527 in cluster kindnet-20220511164516-84527
	I0511 17:13:41.586645    3143 cache.go:120] Beginning downloading kic base image for docker with docker
	I0511 17:13:41.612834    3143 out.go:177] * Pulling base image ...
	I0511 17:13:41.660548    3143 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 17:13:41.660548    3143 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0511 17:13:41.660628    3143 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0511 17:13:41.660648    3143 cache.go:57] Caching tarball of preloaded images
	I0511 17:13:41.660838    3143 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0511 17:13:41.661345    3143 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0511 17:13:41.661881    3143 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/config.json ...
	I0511 17:13:41.661959    3143 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/config.json: {Name:mkfc97122337ca8c659038e00a6214bf7653c455 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:13:41.788024    3143 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0511 17:13:41.788044    3143 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0511 17:13:41.788056    3143 cache.go:206] Successfully downloaded all kic artifacts
	I0511 17:13:41.788112    3143 start.go:352] acquiring machines lock for kindnet-20220511164516-84527: {Name:mkc42a2e08924ed29f8193e1db1d5c6464d9776c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0511 17:13:41.789246    3143 start.go:356] acquired machines lock for "kindnet-20220511164516-84527" in 1.117597ms
	I0511 17:13:41.789282    3143 start.go:91] Provisioning new machine with config: &{Name:kindnet-20220511164516-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220511164516-84527 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0511 17:13:41.789355    3143 start.go:131] createHost starting for "" (driver="docker")
	I0511 17:13:41.815566    3143 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0511 17:13:41.815944    3143 start.go:165] libmachine.API.Create for "kindnet-20220511164516-84527" (driver="docker")
	I0511 17:13:41.816001    3143 client.go:168] LocalClient.Create starting
	I0511 17:13:41.816150    3143 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem
	I0511 17:13:41.836750    3143 main.go:134] libmachine: Decoding PEM data...
	I0511 17:13:41.836796    3143 main.go:134] libmachine: Parsing certificate...
	I0511 17:13:41.836946    3143 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem
	I0511 17:13:41.837033    3143 main.go:134] libmachine: Decoding PEM data...
	I0511 17:13:41.837054    3143 main.go:134] libmachine: Parsing certificate...
	I0511 17:13:41.838240    3143 cli_runner.go:164] Run: docker network inspect kindnet-20220511164516-84527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0511 17:13:41.979757    3143 cli_runner.go:211] docker network inspect kindnet-20220511164516-84527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0511 17:13:41.979870    3143 network_create.go:272] running [docker network inspect kindnet-20220511164516-84527] to gather additional debugging logs...
	I0511 17:13:41.979886    3143 cli_runner.go:164] Run: docker network inspect kindnet-20220511164516-84527
	W0511 17:13:42.101856    3143 cli_runner.go:211] docker network inspect kindnet-20220511164516-84527 returned with exit code 1
	I0511 17:13:42.101878    3143 network_create.go:275] error running [docker network inspect kindnet-20220511164516-84527]: docker network inspect kindnet-20220511164516-84527: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220511164516-84527
	I0511 17:13:42.101909    3143 network_create.go:277] output of [docker network inspect kindnet-20220511164516-84527]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220511164516-84527
	
	** /stderr **
	I0511 17:13:42.102011    3143 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
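The --format template in the inspect commands above renders a network's name, driver, subnet, gateway and MTU as one JSON object. A simplified sketch of consuming such output with Go's standard library (the file and struct names are made up, and ContainerIPs is omitted so the decoded result stays plain JSON); this is illustrative, not minikube's own code:

```go
// network_inspect_sketch.go -- illustrative only, not minikube code.
// Runs the same kind of `docker network inspect --format` template used above
// and decodes its JSON output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

const format = `{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}}`

// netInfo mirrors the fields rendered by the template above.
type netInfo struct {
	Name    string `json:"Name"`
	Driver  string `json:"Driver"`
	Subnet  string `json:"Subnet"`
	Gateway string `json:"Gateway"`
	MTU     int    `json:"MTU"`
}

func main() {
	out, err := exec.Command("docker", "network", "inspect", "bridge", "--format", format).Output()
	if err != nil {
		panic(err)
	}
	var info netInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", info)
}
```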
	I0511 17:13:42.219884    3143 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000518248] misses:0}
	I0511 17:13:42.219924    3143 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0511 17:13:42.219941    3143 network_create.go:115] attempt to create docker network kindnet-20220511164516-84527 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0511 17:13:42.220034    3143 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220511164516-84527
	W0511 17:13:42.339080    3143 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220511164516-84527 returned with exit code 1
	W0511 17:13:42.339126    3143 network_create.go:107] failed to create docker network kindnet-20220511164516-84527 192.168.49.0/24, will retry: subnet is taken
	I0511 17:13:42.339361    3143 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000518248] amended:false}} dirty:map[] misses:0}
	I0511 17:13:42.339375    3143 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0511 17:13:42.339549    3143 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000518248] amended:true}} dirty:map[192.168.49.0:0xc000518248 192.168.58.0:0xc00059a1b8] misses:0}
	I0511 17:13:42.339564    3143 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0511 17:13:42.339571    3143 network_create.go:115] attempt to create docker network kindnet-20220511164516-84527 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0511 17:13:42.339651    3143 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220511164516-84527
	I0511 17:13:48.831651    3143 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220511164516-84527: (6.491882193s)
	I0511 17:13:48.831677    3143 network_create.go:99] docker network kindnet-20220511164516-84527 192.168.58.0/24 created
	I0511 17:13:48.831693    3143 kic.go:106] calculated static IP "192.168.58.2" for the "kindnet-20220511164516-84527" container
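Once the 192.168.58.0/24 network exists, the gateway is its .1 address and the node's static IP is .2, as logged above. A small sketch of that derivation for a /24 network, with a hypothetical hostIP helper (illustrative only):

```go
// subnet_sketch.go -- illustrative only; hostIP is a hypothetical helper.
// Derives the addresses the log reports for 192.168.58.0/24:
// gateway 192.168.58.1 and static node IP 192.168.58.2.
package main

import (
	"fmt"
	"net"
)

// hostIP returns the n-th host address inside a /24 network.
func hostIP(network net.IP, n byte) net.IP {
	ip := make(net.IP, len(network))
	copy(ip, network)
	ip[len(ip)-1] = n
	return ip
}

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.58.0/24")
	if err != nil {
		panic(err)
	}
	base := ipnet.IP.To4()
	fmt.Println("gateway:", hostIP(base, 1)) // 192.168.58.1
	fmt.Println("node:   ", hostIP(base, 2)) // 192.168.58.2
}
```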
	I0511 17:13:48.831803    3143 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0511 17:13:48.953749    3143 cli_runner.go:164] Run: docker volume create kindnet-20220511164516-84527 --label name.minikube.sigs.k8s.io=kindnet-20220511164516-84527 --label created_by.minikube.sigs.k8s.io=true
	I0511 17:13:49.073457    3143 oci.go:103] Successfully created a docker volume kindnet-20220511164516-84527
	I0511 17:13:49.073586    3143 cli_runner.go:164] Run: docker run --rm --name kindnet-20220511164516-84527-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220511164516-84527 --entrypoint /usr/bin/test -v kindnet-20220511164516-84527:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0511 17:13:49.623409    3143 oci.go:107] Successfully prepared a docker volume kindnet-20220511164516-84527
	I0511 17:13:49.623459    3143 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 17:13:49.623473    3143 kic.go:179] Starting extracting preloaded images to volume ...
	I0511 17:13:49.623614    3143 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220511164516-84527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0511 17:13:54.406608    3143 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220511164516-84527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (4.782874339s)
	I0511 17:13:54.406631    3143 kic.go:188] duration metric: took 4.783104 seconds to extract preloaded images to volume
	I0511 17:13:54.406746    3143 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0511 17:13:54.613390    3143 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220511164516-84527 --name kindnet-20220511164516-84527 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220511164516-84527 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220511164516-84527 --network kindnet-20220511164516-84527 --ip 192.168.58.2 --volume kindnet-20220511164516-84527:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0511 17:14:05.690889    3143 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220511164516-84527 --name kindnet-20220511164516-84527 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220511164516-84527 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220511164516-84527 --network kindnet-20220511164516-84527 --ip 192.168.58.2 --volume kindnet-20220511164516-84527:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (11.077256849s)
	I0511 17:14:05.691030    3143 cli_runner.go:164] Run: docker container inspect kindnet-20220511164516-84527 --format={{.State.Running}}
	I0511 17:14:05.826103    3143 cli_runner.go:164] Run: docker container inspect kindnet-20220511164516-84527 --format={{.State.Status}}
	I0511 17:14:05.948868    3143 cli_runner.go:164] Run: docker exec kindnet-20220511164516-84527 stat /var/lib/dpkg/alternatives/iptables
	I0511 17:14:06.131532    3143 oci.go:247] the created container "kindnet-20220511164516-84527" has a running status.
	I0511 17:14:06.131572    3143 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kindnet-20220511164516-84527/id_rsa...
	I0511 17:14:06.238518    3143 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kindnet-20220511164516-84527/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0511 17:14:06.427316    3143 cli_runner.go:164] Run: docker container inspect kindnet-20220511164516-84527 --format={{.State.Status}}
	I0511 17:14:06.559566    3143 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0511 17:14:06.559586    3143 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220511164516-84527 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0511 17:14:06.737381    3143 cli_runner.go:164] Run: docker container inspect kindnet-20220511164516-84527 --format={{.State.Status}}
	I0511 17:14:06.861227    3143 machine.go:88] provisioning docker machine ...
	I0511 17:14:06.861275    3143 ubuntu.go:169] provisioning hostname "kindnet-20220511164516-84527"
	I0511 17:14:06.861388    3143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220511164516-84527
	I0511 17:14:07.027899    3143 main.go:134] libmachine: Using SSH client type: native
	I0511 17:14:07.028158    3143 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 50945 <nil> <nil>}
	I0511 17:14:07.028177    3143 main.go:134] libmachine: About to run SSH command:
	sudo hostname kindnet-20220511164516-84527 && echo "kindnet-20220511164516-84527" | sudo tee /etc/hostname
	I0511 17:14:07.029920    3143 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0511 17:14:10.158356    3143 main.go:134] libmachine: SSH cmd err, output: <nil>: kindnet-20220511164516-84527
	
	I0511 17:14:10.158480    3143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220511164516-84527
	I0511 17:14:10.282895    3143 main.go:134] libmachine: Using SSH client type: native
	I0511 17:14:10.283052    3143 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 50945 <nil> <nil>}
	I0511 17:14:10.283068    3143 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20220511164516-84527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220511164516-84527/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20220511164516-84527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0511 17:14:10.393408    3143 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0511 17:14:10.393430    3143 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube}
	I0511 17:14:10.393459    3143 ubuntu.go:177] setting up certificates
	I0511 17:14:10.393467    3143 provision.go:83] configureAuth start
	I0511 17:14:10.393557    3143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220511164516-84527
	I0511 17:14:10.516662    3143 provision.go:138] copyHostCerts
	I0511 17:14:10.516752    3143 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem, removing ...
	I0511 17:14:10.516761    3143 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem
	I0511 17:14:10.516853    3143 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem (1679 bytes)
	I0511 17:14:10.517041    3143 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem, removing ...
	I0511 17:14:10.517053    3143 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem
	I0511 17:14:10.517112    3143 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem (1082 bytes)
	I0511 17:14:10.517256    3143 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem, removing ...
	I0511 17:14:10.517262    3143 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem
	I0511 17:14:10.517319    3143 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem (1123 bytes)
	I0511 17:14:10.517439    3143 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem org=jenkins.kindnet-20220511164516-84527 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220511164516-84527]
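The server certificate above carries both IP SANs (192.168.58.2, 127.0.0.1) and DNS SANs (localhost, minikube, the node name), and the profile's CertExpiration of 26280h works out to three years. A simplified, self-signed sketch of putting such a SAN list into an x509 certificate with Go's crypto/x509; minikube's real code signs with the minikube CA instead:

```go
// cert_sketch.go -- a simplified, self-signed stand-in for the CA-signed server
// cert generated above; shown only to illustrate how the san=[...] list becomes
// IPAddresses and DNSNames in the certificate. Not minikube code.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-20220511164516-84527"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // 26280h, as in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "kindnet-20220511164516-84527"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
```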
	I0511 17:14:10.708093    3143 provision.go:172] copyRemoteCerts
	I0511 17:14:10.708151    3143 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0511 17:14:10.708212    3143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220511164516-84527
	I0511 17:14:10.830347    3143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50945 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kindnet-20220511164516-84527/id_rsa Username:docker}
	I0511 17:14:10.910424    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0511 17:14:10.928213    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0511 17:14:10.946543    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0511 17:14:10.964615    3143 provision.go:86] duration metric: configureAuth took 571.125619ms
	I0511 17:14:10.964628    3143 ubuntu.go:193] setting minikube options for container-runtime
	I0511 17:14:10.964775    3143 config.go:178] Loaded profile config "kindnet-20220511164516-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 17:14:10.964848    3143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220511164516-84527
	I0511 17:14:11.085561    3143 main.go:134] libmachine: Using SSH client type: native
	I0511 17:14:11.085724    3143 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 50945 <nil> <nil>}
	I0511 17:14:11.085735    3143 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0511 17:14:11.196827    3143 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0511 17:14:11.196838    3143 ubuntu.go:71] root file system type: overlay
	I0511 17:14:11.196992    3143 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0511 17:14:11.197096    3143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220511164516-84527
	I0511 17:14:11.316199    3143 main.go:134] libmachine: Using SSH client type: native
	I0511 17:14:11.316354    3143 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 50945 <nil> <nil>}
	I0511 17:14:11.316403    3143 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0511 17:14:11.431983    3143 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0511 17:14:11.432095    3143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220511164516-84527
	I0511 17:14:11.555704    3143 main.go:134] libmachine: Using SSH client type: native
	I0511 17:14:11.555866    3143 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 50945 <nil> <nil>}
	I0511 17:14:11.555881    3143 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0511 17:14:41.125146    3143 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 00:14:11.436956038 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0511 17:14:41.125176    3143 machine.go:91] provisioned docker machine in 34.263542721s
	I0511 17:14:41.125195    3143 client.go:171] LocalClient.Create took 59.308519841s
	I0511 17:14:41.125210    3143 start.go:173] duration metric: libmachine.API.Create for "kindnet-20220511164516-84527" took 59.308604156s
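The docker.service update a few lines above is idempotent: the desired unit is written to a .new file, diffed against the installed one, and only swapped in (with daemon-reload, enable and restart) when they differ. A rough local sketch of that pattern; minikube performs these steps over SSH with sudo, and the helper below is hypothetical:

```go
// unit_update_sketch.go -- illustrative sketch of the "write .new, compare,
// swap and restart only on change" pattern seen above; not minikube code.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit installs `desired` at `path` and restarts docker only when the
// installed unit differs, mirroring the diff-then-swap shell command above.
func updateUnit(path string, desired []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, desired) {
		return nil // already up to date: skip daemon-reload and restart
	}
	if err := os.WriteFile(path+".new", desired, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Example call; running this for real needs root and a full unit body.
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```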
	I0511 17:14:41.125219    3143 start.go:306] post-start starting for "kindnet-20220511164516-84527" (driver="docker")
	I0511 17:14:41.125224    3143 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0511 17:14:41.125315    3143 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0511 17:14:41.125391    3143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220511164516-84527
	I0511 17:14:41.464256    3143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50945 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kindnet-20220511164516-84527/id_rsa Username:docker}
	I0511 17:14:41.562758    3143 ssh_runner.go:195] Run: cat /etc/os-release
	I0511 17:14:41.566740    3143 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0511 17:14:41.566755    3143 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0511 17:14:41.566767    3143 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0511 17:14:41.566774    3143 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0511 17:14:41.566788    3143 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/addons for local assets ...
	I0511 17:14:41.567089    3143 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files for local assets ...
	I0511 17:14:41.567463    3143 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/845272.pem -> 845272.pem in /etc/ssl/certs
	I0511 17:14:41.567637    3143 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0511 17:14:41.575078    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/845272.pem --> /etc/ssl/certs/845272.pem (1708 bytes)
	I0511 17:14:41.613730    3143 start.go:309] post-start completed in 488.49479ms
	I0511 17:14:41.614671    3143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220511164516-84527
	I0511 17:14:41.916700    3143 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/config.json ...
	I0511 17:14:41.917155    3143 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0511 17:14:41.917214    3143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220511164516-84527
	I0511 17:14:42.193884    3143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50945 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kindnet-20220511164516-84527/id_rsa Username:docker}
	I0511 17:14:42.272714    3143 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0511 17:14:42.277586    3143 start.go:134] duration metric: createHost completed in 1m0.487546738s
	I0511 17:14:42.277603    3143 start.go:81] releasing machines lock for "kindnet-20220511164516-84527", held for 1m0.487669405s
	I0511 17:14:42.277694    3143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220511164516-84527
	I0511 17:14:42.595411    3143 ssh_runner.go:195] Run: systemctl --version
	I0511 17:14:42.595482    3143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220511164516-84527
	I0511 17:14:42.595993    3143 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0511 17:14:42.596246    3143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220511164516-84527
	I0511 17:14:42.742990    3143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50945 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kindnet-20220511164516-84527/id_rsa Username:docker}
	I0511 17:14:42.743049    3143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50945 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kindnet-20220511164516-84527/id_rsa Username:docker}
	I0511 17:14:42.951013    3143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0511 17:14:42.961457    3143 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0511 17:14:42.972868    3143 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0511 17:14:42.972930    3143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0511 17:14:42.983695    3143 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0511 17:14:42.998849    3143 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0511 17:14:43.069394    3143 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0511 17:14:43.131892    3143 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0511 17:14:43.143271    3143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0511 17:14:43.206133    3143 ssh_runner.go:195] Run: sudo systemctl start docker
	I0511 17:14:43.218913    3143 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0511 17:14:43.257911    3143 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0511 17:14:43.344349    3143 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0511 17:14:43.344514    3143 cli_runner.go:164] Run: docker exec -t kindnet-20220511164516-84527 dig +short host.docker.internal
	I0511 17:14:43.549356    3143 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0511 17:14:43.549947    3143 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0511 17:14:43.556930    3143 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0511 17:14:43.571089    3143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20220511164516-84527
	I0511 17:14:43.778134    3143 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0511 17:14:43.804091    3143 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 17:14:43.804194    3143 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0511 17:14:43.835921    3143 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0511 17:14:43.835936    3143 docker.go:541] Images already preloaded, skipping extraction
	I0511 17:14:43.836046    3143 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0511 17:14:43.867952    3143 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0511 17:14:43.867974    3143 cache_images.go:84] Images are preloaded, skipping loading
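The two docker images listings above are how the run decides the preloaded images are already present and extraction can be skipped. A small sketch of that presence check, with the expected list copied from the output above (illustrative only, not minikube's cache code):

```go
// preload_check_sketch.go -- illustrative only. Compares the output of
// `docker images --format {{.Repository}}:{{.Tag}}` against the images the
// log above expects to be preloaded.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := []string{
		"k8s.gcr.io/kube-apiserver:v1.23.5",
		"k8s.gcr.io/kube-proxy:v1.23.5",
		"k8s.gcr.io/kube-controller-manager:v1.23.5",
		"k8s.gcr.io/kube-scheduler:v1.23.5",
		"k8s.gcr.io/etcd:3.5.1-0",
		"k8s.gcr.io/coredns/coredns:v1.8.6",
		"k8s.gcr.io/pause:3.6",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := make(map[string]bool)
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	missing := 0
	for _, img := range want {
		if !have[img] {
			fmt.Println("missing:", img)
			missing++
		}
	}
	fmt.Println("preloaded:", missing == 0)
}
```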
	I0511 17:14:43.868108    3143 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0511 17:14:43.949967    3143 cni.go:95] Creating CNI manager for "kindnet"
	I0511 17:14:43.949992    3143 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
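The pod CIDR chosen here must not collide with the service CIDR (10.96.0.0/12) that appears in the kubeadm options below. A tiny sanity-check sketch, illustrative only and not part of minikube:

```go
// cidr_check_sketch.go -- illustrative only. Sanity-checks that the pod CIDR
// and service CIDR from the kubeadm configuration below do not overlap.
package main

import (
	"fmt"
	"net"
)

func mustCIDR(s string) *net.IPNet {
	_, n, err := net.ParseCIDR(s)
	if err != nil {
		panic(err)
	}
	return n
}

// Two CIDR blocks overlap iff either one contains the other's network address.
func overlap(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	pods := mustCIDR("10.244.0.0/16") // podSubnet
	svcs := mustCIDR("10.96.0.0/12")  // serviceSubnet
	fmt.Println("pod/service CIDRs overlap:", overlap(pods, svcs)) // false
}
```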
	I0511 17:14:43.950007    3143 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220511164516-84527 NodeName:kindnet-20220511164516-84527 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0511 17:14:43.950122    3143 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kindnet-20220511164516-84527"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0511 17:14:43.950187    3143 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kindnet-20220511164516-84527 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220511164516-84527 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0511 17:14:43.950268    3143 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0511 17:14:43.961568    3143 binaries.go:44] Found k8s binaries, skipping transfer
	I0511 17:14:43.961653    3143 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0511 17:14:43.972283    3143 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (406 bytes)
	I0511 17:14:43.987317    3143 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0511 17:14:44.009104    3143 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I0511 17:14:44.027762    3143 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0511 17:14:44.033543    3143 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0511 17:14:44.045163    3143 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527 for IP: 192.168.58.2
	I0511 17:14:44.045279    3143 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key
	I0511 17:14:44.045340    3143 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key
	I0511 17:14:44.045398    3143 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/client.key
	I0511 17:14:44.045414    3143 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/client.crt with IP's: []
	I0511 17:14:44.304568    3143 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/client.crt ...
	I0511 17:14:44.304593    3143 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/client.crt: {Name:mka53574ffaa90db310cf71bc6ecb9367cf7b5b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:14:44.305851    3143 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/client.key ...
	I0511 17:14:44.305867    3143 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/client.key: {Name:mk4a7bef7534910ee9b60d19ff3ee82840eaef29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:14:44.306663    3143 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/apiserver.key.cee25041
	I0511 17:14:44.306709    3143 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0511 17:14:44.490381    3143 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/apiserver.crt.cee25041 ...
	I0511 17:14:44.490405    3143 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/apiserver.crt.cee25041: {Name:mk75f3ff8170101f7569fedae5fd95e1556ead06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:14:44.491197    3143 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/apiserver.key.cee25041 ...
	I0511 17:14:44.491214    3143 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/apiserver.key.cee25041: {Name:mk956d3651c881ab17cff1f447e07b65e9395574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:14:44.492124    3143 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/apiserver.crt
	I0511 17:14:44.492315    3143 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/apiserver.key
	I0511 17:14:44.492475    3143 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/proxy-client.key
	I0511 17:14:44.492498    3143 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/proxy-client.crt with IP's: []
	I0511 17:14:44.637220    3143 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/proxy-client.crt ...
	I0511 17:14:44.637236    3143 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/proxy-client.crt: {Name:mk57f0ac1c71401b7e955ad2e4a0a6a6be13ca4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:14:44.637981    3143 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/proxy-client.key ...
	I0511 17:14:44.637991    3143 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/proxy-client.key: {Name:mk7402a059df7bdd5bdd324ff7678398b383212e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:14:44.638866    3143 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/84527.pem (1338 bytes)
	W0511 17:14:44.638923    3143 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/84527_empty.pem, impossibly tiny 0 bytes
	I0511 17:14:44.638953    3143 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem (1679 bytes)
	I0511 17:14:44.638990    3143 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem (1082 bytes)
	I0511 17:14:44.639032    3143 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem (1123 bytes)
	I0511 17:14:44.639081    3143 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem (1679 bytes)
	I0511 17:14:44.639191    3143 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/845272.pem (1708 bytes)
	I0511 17:14:44.639736    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0511 17:14:44.660469    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0511 17:14:44.681523    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0511 17:14:44.700738    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kindnet-20220511164516-84527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0511 17:14:44.720414    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0511 17:14:44.741859    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0511 17:14:44.762061    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0511 17:14:44.779805    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0511 17:14:44.798055    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/845272.pem --> /usr/share/ca-certificates/845272.pem (1708 bytes)
	I0511 17:14:44.815129    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0511 17:14:44.832884    3143 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/84527.pem --> /usr/share/ca-certificates/84527.pem (1338 bytes)
	I0511 17:14:44.850440    3143 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0511 17:14:44.865097    3143 ssh_runner.go:195] Run: openssl version
	I0511 17:14:44.870701    3143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84527.pem && ln -fs /usr/share/ca-certificates/84527.pem /etc/ssl/certs/84527.pem"
	I0511 17:14:44.878927    3143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84527.pem
	I0511 17:14:44.882911    3143 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 23:00 /usr/share/ca-certificates/84527.pem
	I0511 17:14:44.882963    3143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84527.pem
	I0511 17:14:44.888430    3143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84527.pem /etc/ssl/certs/51391683.0"
	I0511 17:14:44.896327    3143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845272.pem && ln -fs /usr/share/ca-certificates/845272.pem /etc/ssl/certs/845272.pem"
	I0511 17:14:44.904180    3143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845272.pem
	I0511 17:14:44.908108    3143 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:00 /usr/share/ca-certificates/845272.pem
	I0511 17:14:44.908156    3143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845272.pem
	I0511 17:14:44.913721    3143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/845272.pem /etc/ssl/certs/3ec20f2e.0"
	I0511 17:14:44.921434    3143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0511 17:14:44.929154    3143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0511 17:14:44.933001    3143 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I0511 17:14:44.933044    3143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0511 17:14:44.938647    3143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0511 17:14:44.946389    3143 kubeadm.go:391] StartCluster: {Name:kindnet-20220511164516-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220511164516-84527 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 17:14:44.946508    3143 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0511 17:14:44.974565    3143 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0511 17:14:44.982459    3143 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0511 17:14:44.989686    3143 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0511 17:14:44.989743    3143 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0511 17:14:44.997109    3143 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0511 17:14:44.997133    3143 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0511 17:14:45.530493    3143 out.go:204]   - Generating certificates and keys ...
	I0511 17:14:47.659527    3143 out.go:204]   - Booting up control plane ...
	I0511 17:14:56.731973    3143 out.go:204]   - Configuring RBAC rules ...
	I0511 17:14:57.197581    3143 cni.go:95] Creating CNI manager for "kindnet"
	I0511 17:14:57.260005    3143 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0511 17:14:57.295399    3143 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0511 17:14:57.300662    3143 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0511 17:14:57.300673    3143 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0511 17:14:57.315051    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0511 17:14:57.933171    3143 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0511 17:14:57.933267    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:14:57.933268    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=kindnet-20220511164516-84527 minikube.k8s.io/updated_at=2022_05_11T17_14_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:14:57.942603    3143 ops.go:34] apiserver oom_adj: -16
	I0511 17:14:58.024418    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:14:58.594391    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:14:59.090070    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:14:59.591092    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:00.091904    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:00.590337    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:01.090467    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:01.591895    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:02.090115    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:02.590629    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:03.091744    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:03.590301    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:04.090217    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:04.590621    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:05.090203    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:05.591233    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:06.090276    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:06.591458    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:07.090224    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:07.590241    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:08.093962    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:08.590484    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:09.090218    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:09.590233    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:10.090231    3143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:15:10.180817    3143 kubeadm.go:1020] duration metric: took 12.247494902s to wait for elevateKubeSystemPrivileges.
	I0511 17:15:10.180835    3143 kubeadm.go:393] StartCluster complete in 25.234172151s
	I0511 17:15:10.180858    3143 settings.go:142] acquiring lock: {Name:mk1c460769e7c664507a8af69f45c0543d2c3117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:15:10.180960    3143 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 17:15:10.181623    3143 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig: {Name:mkf471860c8603bccffa01a67a121482d1a42c8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:15:10.705993    3143 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220511164516-84527" rescaled to 1
	I0511 17:15:10.706036    3143 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0511 17:15:10.706041    3143 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0511 17:15:10.706067    3143 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0511 17:15:10.747664    3143 out.go:177] * Verifying Kubernetes components...
	I0511 17:15:10.706214    3143 config.go:178] Loaded profile config "kindnet-20220511164516-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 17:15:10.747733    3143 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220511164516-84527"
	I0511 17:15:10.747733    3143 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220511164516-84527"
	I0511 17:15:10.780959    3143 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0511 17:15:10.807524    3143 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220511164516-84527"
	W0511 17:15:10.807539    3143 addons.go:165] addon storage-provisioner should already be in state true
	I0511 17:15:10.807541    3143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0511 17:15:10.807539    3143 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220511164516-84527"
	I0511 17:15:10.807595    3143 host.go:66] Checking if "kindnet-20220511164516-84527" exists ...
	I0511 17:15:10.808020    3143 cli_runner.go:164] Run: docker container inspect kindnet-20220511164516-84527 --format={{.State.Status}}
	I0511 17:15:10.808612    3143 cli_runner.go:164] Run: docker container inspect kindnet-20220511164516-84527 --format={{.State.Status}}
	I0511 17:15:10.824413    3143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20220511164516-84527
	I0511 17:15:10.976807    3143 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220511164516-84527"
	I0511 17:15:11.017744    3143 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0511 17:15:10.994695    3143 start.go:815] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	W0511 17:15:11.017760    3143 addons.go:165] addon default-storageclass should already be in state true
	I0511 17:15:11.044558    3143 host.go:66] Checking if "kindnet-20220511164516-84527" exists ...
	I0511 17:15:11.044608    3143 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0511 17:15:11.044618    3143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0511 17:15:11.044694    3143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220511164516-84527
	I0511 17:15:11.045772    3143 cli_runner.go:164] Run: docker container inspect kindnet-20220511164516-84527 --format={{.State.Status}}
	I0511 17:15:11.047690    3143 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220511164516-84527" to be "Ready" ...
	I0511 17:15:11.191710    3143 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0511 17:15:11.191723    3143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0511 17:15:11.191719    3143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50945 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kindnet-20220511164516-84527/id_rsa Username:docker}
	I0511 17:15:11.191799    3143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220511164516-84527
	I0511 17:15:11.282180    3143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0511 17:15:11.320202    3143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50945 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kindnet-20220511164516-84527/id_rsa Username:docker}
	I0511 17:15:11.422706    3143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0511 17:15:11.583483    3143 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0511 17:15:11.614589    3143 addons.go:417] enableAddons completed in 908.491715ms
	I0511 17:15:13.056242    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:15.057158    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:17.560749    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:20.055897    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:22.057850    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:24.061543    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:26.558453    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:29.057144    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:31.059095    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:33.557835    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:36.057085    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:38.058061    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:40.556385    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:42.557616    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:45.062092    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:47.557291    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:50.058346    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:52.556925    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:54.557956    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:57.061597    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:15:59.558718    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:01.559458    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:04.059155    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:06.061501    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:08.062358    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:10.556736    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:12.557170    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:14.559606    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:16.560900    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:19.059446    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:21.560334    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:24.063350    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:26.065637    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:28.557731    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:30.560555    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:32.560759    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:35.059550    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:37.559524    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:40.065964    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:42.560129    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:45.066540    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:47.556912    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:49.563175    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:52.065723    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:54.557535    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:56.559237    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:16:59.058922    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:01.059343    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:03.060557    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:05.564071    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:08.066466    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:10.559739    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:13.058380    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:15.062343    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:17.562661    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:20.059765    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:22.560731    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:25.063317    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:27.559966    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:29.563396    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:32.059209    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:34.059414    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:36.063911    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:38.563147    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:41.060546    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:43.065643    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:45.567296    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:48.062810    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:50.558942    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:52.561865    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:55.060069    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:57.060290    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:17:59.061507    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:01.564316    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:04.065075    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:06.562453    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:09.059536    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:11.066694    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:13.560672    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:16.061592    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:18.569101    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:21.062159    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:23.565204    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:26.059378    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:28.560021    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:31.062964    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:33.563335    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:35.567310    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:38.060806    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:40.563287    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:43.062205    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:45.560332    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:48.063429    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:50.561645    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:52.565084    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:55.066343    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:18:57.559759    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:19:00.067168    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:19:02.561237    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:19:04.565398    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:19:07.065724    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:19:09.569216    3143 node_ready.go:58] node "kindnet-20220511164516-84527" has status "Ready":"False"
	I0511 17:19:11.073445    3143 node_ready.go:38] duration metric: took 4m0.010615918s waiting for node "kindnet-20220511164516-84527" to be "Ready" ...
	I0511 17:19:11.115710    3143 out.go:177] 
	W0511 17:19:11.141793    3143 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0511 17:19:11.141812    3143 out.go:239] * 
	* 
	W0511 17:19:11.142863    3143 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0511 17:19:11.215779    3143 out.go:177] 

                                                
                                                
** /stderr **
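
The stderr capture above shows the bootstrap itself succeeding (kubeadm init completes, the kindnet CNI manifest is applied, both addons enable), but the node_ready.go poll never sees the node's Ready condition turn true, so the start gives up and exits with GUEST_START. What that wait amounts to can be sketched directly against client-go; the following is only an illustrative stand-in for the logged behavior, not minikube's node_ready.go, and the kubeconfig path and poll interval are placeholders:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or the
// timeout expires, roughly what the node_ready.go lines above are logging.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API hiccups as "not ready yet" and keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Placeholder kubeconfig path; minikube uses its per-profile credentials instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), client, "kindnet-20220511164516-84527", 5*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

On a healthy start the condition flips once the CNI is functional; in the run above it never does, which is why the poll runs out its budget.
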
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (330.68s)
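
One detail from earlier in the same log (the 17:14:44 cert staging) is the trust setup: after the PEMs are copied into the guest, each one is linked into /etc/ssl/certs under its OpenSSL subject hash (the openssl x509 -hash -noout runs followed by ln -fs to names like 51391683.0), which is how the system OpenSSL locates CA certificates. A minimal Go sketch of that convention, assuming openssl is on PATH and /etc/ssl/certs is writable; the paths are illustrative, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links a PEM certificate into /etc/ssl/certs under its OpenSSL
// subject hash (<hash>.0), the same convention the logged openssl/ln commands use.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // behave like ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

The shell equivalent seen in the log is ln -fs <pem> /etc/ssl/certs/$(openssl x509 -hash -noout -in <pem>).0.
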

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (335.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123346349s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:17:01.162419   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138981366s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:17:16.098688   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125280599s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:17:35.271420   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147931664s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:17:43.817217   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125887121s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:18:10.311359   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123894308s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:18:34.374207   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:18:34.381956   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:18:34.396933   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:18:34.427113   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:18:34.467899   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:18:34.553591   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:18:34.651720   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 17:18:34.714817   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:18:35.035122   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:18:35.675739   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:18:36.960475   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:18:39.545934   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:18:41.498636   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:18:44.666269   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146954795s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0511 17:18:47.478007   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 17:18:54.907111   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:18:58.381110   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.194494638s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.120987572s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0511 17:19:56.388382   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12017922s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0511 17:20:42.464193   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138328348s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0511 17:21:18.312186   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:21:50.632854   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
E0511 17:22:01.208498   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144526357s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (335.72s)
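
Each attempt above is net_test.go:169 exec'ing into the test's netcat deployment and running nslookup kubernetes.default, expecting the answer to contain the cluster service IP 10.96.0.1; with the bridge plugin every attempt times out, so net_test.go:180 fails the comparison. A small stand-alone sketch of that check, assuming kubectl is on PATH and reusing the same context name; the retry count and delay here are arbitrary, not the test's schedule:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// lookupViaNetcat execs nslookup inside the netcat deployment,
// the same command net_test.go:169 runs above.
func lookupViaNetcat(kubeContext string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default").CombinedOutput()
	return string(out), err
}

func main() {
	const want = "10.96.0.1" // kubernetes.default ClusterIP for the 10.96.0.0/12 service CIDR
	kubeContext := "bridge-20220511164515-84527"
	for attempt := 1; attempt <= 5; attempt++ { // arbitrary retry budget for this sketch
		out, err := lookupViaNetcat(kubeContext)
		if err == nil && strings.Contains(out, want) {
			fmt.Println("in-cluster DNS resolves kubernetes.default")
			return
		}
		fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
		time.Sleep(10 * time.Second)
	}
	fmt.Println("DNS never resolved; CoreDNS is unreachable from the pod network")
}

A repeated ";; connection timed out; no servers could be reached" from nslookup, as in the output above, means the pod cannot reach the CoreDNS service at all rather than getting a wrong answer back.
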

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (342.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220511164515-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubenet-20220511164515-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : signal: killed (5m42.137327552s)

                                                
                                                
-- stdout --
	* [kubenet-20220511164515-84527] minikube v1.25.2 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13639
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kubenet-20220511164515-84527 in cluster kubenet-20220511164515-84527
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass

                                                
                                                
-- /stdout --
** stderr ** 
	I0511 17:19:33.214757    3872 out.go:296] Setting OutFile to fd 1 ...
	I0511 17:19:33.214991    3872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 17:19:33.214996    3872 out.go:309] Setting ErrFile to fd 2...
	I0511 17:19:33.215000    3872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 17:19:33.215108    3872 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	I0511 17:19:33.215434    3872 out.go:303] Setting JSON to false
	I0511 17:19:33.231785    3872 start.go:115] hostinfo: {"hostname":"37310.local","uptime":29948,"bootTime":1652284825,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0511 17:19:33.231890    3872 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0511 17:19:33.258986    3872 out.go:177] * [kubenet-20220511164515-84527] minikube v1.25.2 on Darwin 11.2.3
	I0511 17:19:33.306985    3872 notify.go:193] Checking for updates...
	I0511 17:19:33.332610    3872 out.go:177]   - MINIKUBE_LOCATION=13639
	I0511 17:19:33.360452    3872 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 17:19:33.385908    3872 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0511 17:19:33.411856    3872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 17:19:33.437828    3872 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	I0511 17:19:33.464367    3872 config.go:178] Loaded profile config "bridge-20220511164515-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 17:19:33.464474    3872 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 17:19:33.564665    3872 docker.go:137] docker version: linux-20.10.6
	I0511 17:19:33.564821    3872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 17:19:33.748750    3872 info.go:265] docker info: {ID:RQDQ:HCOB:T3HU:YQ6G:4CPW:M2H3:E64P:XHRS:32BB:YAUK:A452:DSC2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-05-12 00:19:33.677170352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0511 17:19:33.797610    3872 out.go:177] * Using the docker driver based on user configuration
	I0511 17:19:33.823399    3872 start.go:284] selected driver: docker
	I0511 17:19:33.823426    3872 start.go:801] validating driver "docker" against <nil>
	I0511 17:19:33.823459    3872 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0511 17:19:33.827229    3872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 17:19:34.010902    3872 info.go:265] docker info: {ID:RQDQ:HCOB:T3HU:YQ6G:4CPW:M2H3:E64P:XHRS:32BB:YAUK:A452:DSC2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-05-12 00:19:33.939354584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0511 17:19:34.011007    3872 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0511 17:19:34.011149    3872 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0511 17:19:34.037865    3872 out.go:177] * Using Docker Desktop driver with the root privilege
	I0511 17:19:34.063651    3872 cni.go:91] network plugin configured as "kubenet", returning disabled
	I0511 17:19:34.063680    3872 start_flags.go:306] config:
	{Name:kubenet-20220511164515-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kubenet-20220511164515-84527 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 17:19:34.089615    3872 out.go:177] * Starting control plane node kubenet-20220511164515-84527 in cluster kubenet-20220511164515-84527
	I0511 17:19:34.115523    3872 cache.go:120] Beginning downloading kic base image for docker with docker
	I0511 17:19:34.141664    3872 out.go:177] * Pulling base image ...
	I0511 17:19:34.188701    3872 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 17:19:34.188718    3872 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0511 17:19:34.188793    3872 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0511 17:19:34.188814    3872 cache.go:57] Caching tarball of preloaded images
	I0511 17:19:34.189044    3872 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0511 17:19:34.189075    3872 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
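The preload.go lines above (and the TestDownloadOnly/v1.16.0/preload-exists failure at the top of this report) come down to a plain stat of the cached tarball path. A sketch under the cache layout visible in the log; `preloadPath` and `preloadExists` are illustrative names, not minikube's API:

// Sketch of the "does the preload tarball exist?" check. The file-name
// pattern is copied from the log line above; MINIKUBE_HOME is assumed to
// point at the .minikube directory, as in this job.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func preloadPath(minikubeHome, k8sVersion, runtime, arch string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4", k8sVersion, runtime, arch)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func preloadExists(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.23.5", "docker", "amd64")
	fmt.Println(p, "exists:", preloadExists(p))
}

A missing v1.16.0 tarball in that directory is exactly what turned the earlier preload-exists case into a failure, while the v1.23.5 tarball is found here and the download is skipped.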
	I0511 17:19:34.190127    3872 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/config.json ...
	I0511 17:19:34.190256    3872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/config.json: {Name:mk294aa7b93aa2d1ae8fdce69003cf62caacf900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:19:34.310938    3872 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0511 17:19:34.310976    3872 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0511 17:19:34.310989    3872 cache.go:206] Successfully downloaded all kic artifacts
	I0511 17:19:34.311042    3872 start.go:352] acquiring machines lock for kubenet-20220511164515-84527: {Name:mk96ba4da2a2c9b831d5ecf948bb2235a5a1cddf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0511 17:19:34.311179    3872 start.go:356] acquired machines lock for "kubenet-20220511164515-84527" in 124.467µs
	I0511 17:19:34.311209    3872 start.go:91] Provisioning new machine with config: &{Name:kubenet-20220511164515-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kubenet-20220511164515-84527 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0511 17:19:34.311273    3872 start.go:131] createHost starting for "" (driver="docker")
	I0511 17:19:34.358724    3872 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0511 17:19:34.358897    3872 start.go:165] libmachine.API.Create for "kubenet-20220511164515-84527" (driver="docker")
	I0511 17:19:34.358921    3872 client.go:168] LocalClient.Create starting
	I0511 17:19:34.359008    3872 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem
	I0511 17:19:34.359055    3872 main.go:134] libmachine: Decoding PEM data...
	I0511 17:19:34.359068    3872 main.go:134] libmachine: Parsing certificate...
	I0511 17:19:34.359123    3872 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem
	I0511 17:19:34.359154    3872 main.go:134] libmachine: Decoding PEM data...
	I0511 17:19:34.359164    3872 main.go:134] libmachine: Parsing certificate...
	I0511 17:19:34.359798    3872 cli_runner.go:164] Run: docker network inspect kubenet-20220511164515-84527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0511 17:19:34.476344    3872 cli_runner.go:211] docker network inspect kubenet-20220511164515-84527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0511 17:19:34.476448    3872 network_create.go:272] running [docker network inspect kubenet-20220511164515-84527] to gather additional debugging logs...
	I0511 17:19:34.476475    3872 cli_runner.go:164] Run: docker network inspect kubenet-20220511164515-84527
	W0511 17:19:34.592856    3872 cli_runner.go:211] docker network inspect kubenet-20220511164515-84527 returned with exit code 1
	I0511 17:19:34.592881    3872 network_create.go:275] error running [docker network inspect kubenet-20220511164515-84527]: docker network inspect kubenet-20220511164515-84527: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220511164515-84527
	I0511 17:19:34.592898    3872 network_create.go:277] output of [docker network inspect kubenet-20220511164515-84527]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220511164515-84527
	
	** /stderr **
	I0511 17:19:34.592993    3872 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0511 17:19:34.712858    3872 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000620ab0] misses:0}
	I0511 17:19:34.712895    3872 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0511 17:19:34.712912    3872 network_create.go:115] attempt to create docker network kubenet-20220511164515-84527 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0511 17:19:34.713001    3872 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220511164515-84527
	W0511 17:19:34.830161    3872 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220511164515-84527 returned with exit code 1
	W0511 17:19:34.830204    3872 network_create.go:107] failed to create docker network kubenet-20220511164515-84527 192.168.49.0/24, will retry: subnet is taken
	I0511 17:19:34.830427    3872 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000620ab0] amended:false}} dirty:map[] misses:0}
	I0511 17:19:34.830441    3872 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0511 17:19:34.830604    3872 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000620ab0] amended:true}} dirty:map[192.168.49.0:0xc000620ab0 192.168.58.0:0xc000620af8] misses:0}
	I0511 17:19:34.830622    3872 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0511 17:19:34.830631    3872 network_create.go:115] attempt to create docker network kubenet-20220511164515-84527 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0511 17:19:34.830708    3872 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220511164515-84527
	I0511 17:19:40.881195    3872 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220511164515-84527: (6.048378754s)
	I0511 17:19:40.881216    3872 network_create.go:99] docker network kubenet-20220511164515-84527 192.168.58.0/24 created
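network_create.go first reserves 192.168.49.0/24, gets exit code 1 from `docker network create` because that pool is already in use by another cluster on this host, and falls back to 192.168.58.0/24. A minimal sketch of that retry loop, shelling out to the docker CLI; the network name, error matching, and the bare-bones flag set are simplifications of what the log shows, and minikube's in-memory reservation bookkeeping is not reproduced:

// Sketch of "try a /24, step to the next block if Docker says it overlaps",
// mirroring the 192.168.49.0/24 -> 192.168.58.0/24 fallback logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func createNetwork(name string, thirdOctet int) error {
	subnet := fmt.Sprintf("192.168.%d.0/24", thirdOctet)
	gateway := fmt.Sprintf("192.168.%d.1", thirdOctet)
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true", name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %w", strings.TrimSpace(string(out)), err)
	}
	fmt.Println("created", name, "on", subnet)
	return nil
}

func main() {
	name := "example-net" // hypothetical network name
	for octet := 49; octet <= 76; octet += 9 {
		err := createNetwork(name, octet)
		if err == nil {
			return
		}
		// Docker reports an already-used subnet as a pool overlap; try the next block.
		if strings.Contains(err.Error(), "Pool overlaps") || strings.Contains(err.Error(), "is already used") {
			continue
		}
		fmt.Println("giving up:", err)
		return
	}
	fmt.Println("no free subnet found in the probed range")
}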
	I0511 17:19:40.881230    3872 kic.go:106] calculated static IP "192.168.58.2" for the "kubenet-20220511164515-84527" container
	I0511 17:19:40.881356    3872 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0511 17:19:40.997757    3872 cli_runner.go:164] Run: docker volume create kubenet-20220511164515-84527 --label name.minikube.sigs.k8s.io=kubenet-20220511164515-84527 --label created_by.minikube.sigs.k8s.io=true
	I0511 17:19:41.115262    3872 oci.go:103] Successfully created a docker volume kubenet-20220511164515-84527
	I0511 17:19:41.115380    3872 cli_runner.go:164] Run: docker run --rm --name kubenet-20220511164515-84527-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220511164515-84527 --entrypoint /usr/bin/test -v kubenet-20220511164515-84527:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0511 17:19:41.622290    3872 oci.go:107] Successfully prepared a docker volume kubenet-20220511164515-84527
	I0511 17:19:41.622336    3872 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 17:19:41.622352    3872 kic.go:179] Starting extracting preloaded images to volume ...
	I0511 17:19:41.622461    3872 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20220511164515-84527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0511 17:19:46.068883    3872 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20220511164515-84527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (4.445300235s)
	I0511 17:19:46.068906    3872 kic.go:188] duration metric: took 4.445491 seconds to extract preloaded images to volume
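The extraction above never touches the host filesystem of the node container: the .tar.lz4 is bind-mounted read-only, the named volume is mounted at /extractDir, and tar runs inside the kicbase image. A sketch of assembling that `docker run` invocation; the paths and the un-pinned image tag in main are placeholders, while the real run uses the kicbase image by digest:

// Sketch of the "extract preload into a volume" docker run shown above:
// mount the tarball read-only, mount the volume at /extractDir, and run
// tar with lz4 decompression inside the base image.
package main

import (
	"fmt"
	"os/exec"
)

func extractPreload(tarball, volume, baseImage string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		baseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical inputs for illustration only.
	err := extractPreload(
		"/tmp/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4",
		"kubenet-20220511164515-84527",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138")
	fmt.Println(err)
}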
	I0511 17:19:46.069044    3872 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0511 17:19:46.263264    3872 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-20220511164515-84527 --name kubenet-20220511164515-84527 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220511164515-84527 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-20220511164515-84527 --network kubenet-20220511164515-84527 --ip 192.168.58.2 --volume kubenet-20220511164515-84527:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0511 17:19:57.568363    3872 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-20220511164515-84527 --name kubenet-20220511164515-84527 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220511164515-84527 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-20220511164515-84527 --network kubenet-20220511164515-84527 --ip 192.168.58.2 --volume kubenet-20220511164515-84527:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (11.3032716s)
	I0511 17:19:57.569308    3872 cli_runner.go:164] Run: docker container inspect kubenet-20220511164515-84527 --format={{.State.Running}}
	I0511 17:19:57.698936    3872 cli_runner.go:164] Run: docker container inspect kubenet-20220511164515-84527 --format={{.State.Status}}
	I0511 17:19:57.822168    3872 cli_runner.go:164] Run: docker exec kubenet-20220511164515-84527 stat /var/lib/dpkg/alternatives/iptables
	I0511 17:19:57.997028    3872 oci.go:247] the created container "kubenet-20220511164515-84527" has a running status.
	I0511 17:19:57.997063    3872 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kubenet-20220511164515-84527/id_rsa...
	I0511 17:19:58.161465    3872 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kubenet-20220511164515-84527/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0511 17:19:58.348998    3872 cli_runner.go:164] Run: docker container inspect kubenet-20220511164515-84527 --format={{.State.Status}}
	I0511 17:19:58.475477    3872 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0511 17:19:58.475495    3872 kic_runner.go:114] Args: [docker exec --privileged kubenet-20220511164515-84527 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0511 17:19:58.654215    3872 cli_runner.go:164] Run: docker container inspect kubenet-20220511164515-84527 --format={{.State.Status}}
	I0511 17:19:58.775172    3872 machine.go:88] provisioning docker machine ...
	I0511 17:19:58.775214    3872 ubuntu.go:169] provisioning hostname "kubenet-20220511164515-84527"
	I0511 17:19:58.775325    3872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220511164515-84527
	I0511 17:19:58.895986    3872 main.go:134] libmachine: Using SSH client type: native
	I0511 17:19:58.896236    3872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 52251 <nil> <nil>}
	I0511 17:19:58.896260    3872 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubenet-20220511164515-84527 && echo "kubenet-20220511164515-84527" | sudo tee /etc/hostname
	I0511 17:19:58.897503    3872 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0511 17:20:02.019996    3872 main.go:134] libmachine: SSH cmd err, output: <nil>: kubenet-20220511164515-84527
	
	I0511 17:20:02.020105    3872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220511164515-84527
	I0511 17:20:02.138073    3872 main.go:134] libmachine: Using SSH client type: native
	I0511 17:20:02.138245    3872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 52251 <nil> <nil>}
	I0511 17:20:02.138263    3872 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-20220511164515-84527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-20220511164515-84527/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-20220511164515-84527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0511 17:20:02.246765    3872 main.go:134] libmachine: SSH cmd err, output: <nil>: 
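All of the provisioning above reaches the node only through the host port Docker bound to 127.0.0.1 for 22/tcp (52251 in this run), which is looked up with `docker container inspect` and then dialed with the generated id_rsa key. A minimal sketch of that lookup-and-dial using golang.org/x/crypto/ssh; the container name and key path are illustrative, and host key verification is skipped here purely because the target is a throwaway local container:

// Sketch: find the host port Docker published for 22/tcp, then run one
// command over SSH with the cluster's id_rsa, roughly what the
// "Using SSH client type: native" entries above are doing.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"

	"golang.org/x/crypto/ssh"
)

func publishedSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	port, err := publishedSSHPort("kubenet-20220511164515-84527")
	if err != nil {
		panic(err)
	}
	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/kubenet-20220511164515-84527/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local throwaway container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:"+port, cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	fmt.Println(string(out), err)
}

The early "Error dialing TCP: ssh: handshake failed: EOF" above is the normal race while sshd in the fresh container is still coming up; the client simply retries.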
	I0511 17:20:02.246794    3872 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube}
	I0511 17:20:02.246816    3872 ubuntu.go:177] setting up certificates
	I0511 17:20:02.246827    3872 provision.go:83] configureAuth start
	I0511 17:20:02.246921    3872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220511164515-84527
	I0511 17:20:02.364337    3872 provision.go:138] copyHostCerts
	I0511 17:20:02.364425    3872 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem, removing ...
	I0511 17:20:02.364433    3872 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem
	I0511 17:20:02.365606    3872 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem (1082 bytes)
	I0511 17:20:02.365800    3872 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem, removing ...
	I0511 17:20:02.365809    3872 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem
	I0511 17:20:02.365870    3872 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem (1123 bytes)
	I0511 17:20:02.366015    3872 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem, removing ...
	I0511 17:20:02.366024    3872 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem
	I0511 17:20:02.366079    3872 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem (1679 bytes)
	I0511 17:20:02.366195    3872 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem org=jenkins.kubenet-20220511164515-84527 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kubenet-20220511164515-84527]
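The server certificate generated above must carry the node's IP on the freshly created 192.168.58.0/24 network plus the localhost/minikube names as SANs, and is signed with the ca.pem/ca-key.pem pair copied just before. A compact crypto/x509 sketch of building such a certificate; it is self-signed here to stay short, whereas the real flow signs with the shared minikube CA, and the subject/expiry values are taken from this run for illustration:

// Sketch: build a TLS server certificate carrying the IP and DNS SANs shown
// in the provision.go line above. Self-signed for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubenet-20220511164515-84527"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // ~26280h, matching CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "kubenet-20220511164515-84527"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}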
	I0511 17:20:02.474700    3872 provision.go:172] copyRemoteCerts
	I0511 17:20:02.474819    3872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0511 17:20:02.474894    3872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220511164515-84527
	I0511 17:20:02.593604    3872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52251 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kubenet-20220511164515-84527/id_rsa Username:docker}
	I0511 17:20:02.674962    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0511 17:20:02.694540    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0511 17:20:02.711447    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0511 17:20:02.732251    3872 provision.go:86] duration metric: configureAuth took 485.366897ms
	I0511 17:20:02.732265    3872 ubuntu.go:193] setting minikube options for container-runtime
	I0511 17:20:02.732423    3872 config.go:178] Loaded profile config "kubenet-20220511164515-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 17:20:02.732500    3872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220511164515-84527
	I0511 17:20:02.851231    3872 main.go:134] libmachine: Using SSH client type: native
	I0511 17:20:02.851379    3872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 52251 <nil> <nil>}
	I0511 17:20:02.851390    3872 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0511 17:20:02.962624    3872 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0511 17:20:02.962635    3872 ubuntu.go:71] root file system type: overlay
	I0511 17:20:02.962817    3872 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0511 17:20:02.962906    3872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220511164515-84527
	I0511 17:20:03.082286    3872 main.go:134] libmachine: Using SSH client type: native
	I0511 17:20:03.082442    3872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 52251 <nil> <nil>}
	I0511 17:20:03.082492    3872 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0511 17:20:03.201860    3872 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0511 17:20:03.201979    3872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220511164515-84527
	I0511 17:20:03.341888    3872 main.go:134] libmachine: Using SSH client type: native
	I0511 17:20:03.342095    3872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d20e0] 0x13d5140 <nil>  [] 0s} 127.0.0.1 52251 <nil> <nil>}
	I0511 17:20:03.342109    3872 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0511 17:20:32.162059    3872 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 00:20:03.200359008 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0511 17:20:32.162084    3872 machine.go:91] provisioned docker machine in 33.384946754s
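The docker.service update above is deliberately idempotent: `diff -u` exits zero when the installed unit already matches the .new file, so the `||` branch (move into place, daemon-reload, enable, restart) only runs when something actually changed, which is why the ~29s docker restart dominates this 33s provisioning step but happens at most once. A small sketch that composes the same compare-then-swap one-liner for an arbitrary unit path; `buildSwapCmd` is an illustrative helper, not minikube's API:

// Sketch: compose the compare-then-swap command used above. A non-zero
// exit from diff (files differ, or only the .new copy exists) is what
// triggers the update branch.
package main

import "fmt"

func buildSwapCmd(unit string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
		unit)
}

func main() {
	fmt.Println(buildSwapCmd("/lib/systemd/system/docker.service"))
}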
	I0511 17:20:32.162092    3872 client.go:171] LocalClient.Create took 57.795773447s
	I0511 17:20:32.162121    3872 start.go:173] duration metric: libmachine.API.Create for "kubenet-20220511164515-84527" took 57.795825521s
	I0511 17:20:32.162134    3872 start.go:306] post-start starting for "kubenet-20220511164515-84527" (driver="docker")
	I0511 17:20:32.162141    3872 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0511 17:20:32.162249    3872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0511 17:20:32.162338    3872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220511164515-84527
	I0511 17:20:32.280091    3872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52251 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kubenet-20220511164515-84527/id_rsa Username:docker}
	I0511 17:20:32.364381    3872 ssh_runner.go:195] Run: cat /etc/os-release
	I0511 17:20:32.368135    3872 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0511 17:20:32.368151    3872 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0511 17:20:32.368158    3872 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0511 17:20:32.368163    3872 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0511 17:20:32.368173    3872 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/addons for local assets ...
	I0511 17:20:32.368270    3872 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files for local assets ...
	I0511 17:20:32.368668    3872 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/845272.pem -> 845272.pem in /etc/ssl/certs
	I0511 17:20:32.368842    3872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0511 17:20:32.376350    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/845272.pem --> /etc/ssl/certs/845272.pem (1708 bytes)
	I0511 17:20:32.393029    3872 start.go:309] post-start completed in 230.876698ms
	I0511 17:20:32.393568    3872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220511164515-84527
	I0511 17:20:32.515418    3872 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/config.json ...
	I0511 17:20:32.515827    3872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0511 17:20:32.515887    3872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220511164515-84527
	I0511 17:20:32.636593    3872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52251 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kubenet-20220511164515-84527/id_rsa Username:docker}
	I0511 17:20:32.714354    3872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0511 17:20:32.719601    3872 start.go:134] duration metric: createHost completed in 58.400889566s
	I0511 17:20:32.719618    3872 start.go:81] releasing machines lock for "kubenet-20220511164515-84527", held for 58.400996669s
	I0511 17:20:32.719702    3872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220511164515-84527
	I0511 17:20:32.838261    3872 ssh_runner.go:195] Run: systemctl --version
	I0511 17:20:32.838331    3872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220511164515-84527
	I0511 17:20:32.838999    3872 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0511 17:20:32.839159    3872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220511164515-84527
	I0511 17:20:32.965495    3872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52251 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kubenet-20220511164515-84527/id_rsa Username:docker}
	I0511 17:20:32.965623    3872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52251 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kubenet-20220511164515-84527/id_rsa Username:docker}
	I0511 17:20:33.201163    3872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0511 17:20:33.210910    3872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0511 17:20:33.221370    3872 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0511 17:20:33.221438    3872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0511 17:20:33.230904    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0511 17:20:33.265335    3872 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0511 17:20:33.321392    3872 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0511 17:20:33.373912    3872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0511 17:20:33.383875    3872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0511 17:20:33.440240    3872 ssh_runner.go:195] Run: sudo systemctl start docker
	I0511 17:20:33.450264    3872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0511 17:20:33.484849    3872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0511 17:20:33.567322    3872 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0511 17:20:33.567526    3872 cli_runner.go:164] Run: docker exec -t kubenet-20220511164515-84527 dig +short host.docker.internal
	I0511 17:20:33.758199    3872 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0511 17:20:33.759091    3872 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0511 17:20:33.764903    3872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
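Both host.minikube.internal (here) and control-plane.minikube.internal (later, just before the kubeadm step) are pinned by stripping any existing line for the name out of /etc/hosts and appending a fresh "IP name" entry; the shell version above does it with grep -v, echo, and cp. A native Go sketch of the same idea, assuming a hosts-format file; `upsertHostEntry` is an illustrative name and the match on the last field is a simplification of the tab-anchored grep in the log:

// Sketch: drop any stale line for `name` from a hosts-format file and
// append "ip\tname", the same effect as the grep -v / echo / cp pipeline above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == name {
			continue // stale entry for this name
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Writing to a scratch file rather than /etc/hosts for the example.
	if err := upsertHostEntry("hosts.test", "192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}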
	I0511 17:20:33.776131    3872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-20220511164515-84527
	I0511 17:20:33.900928    3872 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 17:20:33.901010    3872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0511 17:20:33.931378    3872 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0511 17:20:33.931396    3872 docker.go:541] Images already preloaded, skipping extraction
	I0511 17:20:33.931492    3872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0511 17:20:33.960728    3872 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0511 17:20:33.960746    3872 cache_images.go:84] Images are preloaded, skipping loading
	I0511 17:20:33.960845    3872 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0511 17:20:34.034787    3872 cni.go:91] network plugin configured as "kubenet", returning disabled
	I0511 17:20:34.034807    3872 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0511 17:20:34.034821    3872 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-20220511164515-84527 NodeName:kubenet-20220511164515-84527 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/li
b/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0511 17:20:34.034924    3872 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubenet-20220511164515-84527"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0511 17:20:34.034996    3872 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubenet-20220511164515-84527 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=kubenet --node-ip=192.168.58.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:kubenet-20220511164515-84527 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
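The multi-document kubeadm YAML a few lines up is rendered from the options struct logged just before it. A tiny text/template sketch of that render step, trimmed to a slice of the ClusterConfiguration and parameterising only the node IP, CIDRs, and Kubernetes version; the template text here is an illustration, not minikube's actual template:

// Sketch: render a trimmed slice of the kubeadm config above with
// text/template, the same general mechanism used to produce the full
// multi-document YAML.
package main

import (
	"os"
	"text/template"
)

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.NodeIP}}"]
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodCIDR}}"
  serviceSubnet: {{.ServiceCIDR}}
kubernetesVersion: {{.KubernetesVersion}}
`

type params struct {
	NodeIP            string
	PodCIDR           string
	ServiceCIDR       string
	KubernetesVersion string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterCfg))
	_ = t.Execute(os.Stdout, params{
		NodeIP:            "192.168.58.2",
		PodCIDR:           "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.23.5",
	})
}

The rendered file is what lands on the node as /var/tmp/minikube/kubeadm.yaml.new via the scp shown just below.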
	I0511 17:20:34.035072    3872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0511 17:20:34.042645    3872 binaries.go:44] Found k8s binaries, skipping transfer
	I0511 17:20:34.042697    3872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0511 17:20:34.049895    3872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (404 bytes)
	I0511 17:20:34.062113    3872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0511 17:20:34.074889    3872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I0511 17:20:34.087030    3872 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0511 17:20:34.090865    3872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0511 17:20:34.101258    3872 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527 for IP: 192.168.58.2
	I0511 17:20:34.101389    3872 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key
	I0511 17:20:34.101454    3872 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key
	I0511 17:20:34.101510    3872 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/client.key
	I0511 17:20:34.101547    3872 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/client.crt with IP's: []
	I0511 17:20:34.147430    3872 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/client.crt ...
	I0511 17:20:34.147446    3872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/client.crt: {Name:mke1f2203e60ed4d310a61d69a1629d912c37c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:20:34.148862    3872 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/client.key ...
	I0511 17:20:34.148871    3872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/client.key: {Name:mk1c8a1540f90d908ce991a27cd7f7e0a29beb01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:20:34.149366    3872 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/apiserver.key.cee25041
	I0511 17:20:34.149386    3872 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0511 17:20:34.289241    3872 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/apiserver.crt.cee25041 ...
	I0511 17:20:34.289256    3872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/apiserver.crt.cee25041: {Name:mkd1d3542fc5b2cd80f291176144d76700be7e56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:20:34.290281    3872 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/apiserver.key.cee25041 ...
	I0511 17:20:34.290290    3872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/apiserver.key.cee25041: {Name:mkd15ecf6a4f6f59172884bfff6ecde8c93d4535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:20:34.291004    3872 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/apiserver.crt
	I0511 17:20:34.291166    3872 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/apiserver.key
	I0511 17:20:34.291334    3872 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/proxy-client.key
	I0511 17:20:34.291352    3872 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/proxy-client.crt with IP's: []
	I0511 17:20:34.441867    3872 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/proxy-client.crt ...
	I0511 17:20:34.441890    3872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/proxy-client.crt: {Name:mk7616a3cf56289b01cec74302ba9891d9a4a02c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:20:34.443372    3872 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/proxy-client.key ...
	I0511 17:20:34.443395    3872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/proxy-client.key: {Name:mk24c2b2f21b54386af510d52299fc4bffdb9ec3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:20:34.444536    3872 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/84527.pem (1338 bytes)
	W0511 17:20:34.444596    3872 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/84527_empty.pem, impossibly tiny 0 bytes
	I0511 17:20:34.444613    3872 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem (1679 bytes)
	I0511 17:20:34.444655    3872 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem (1082 bytes)
	I0511 17:20:34.444699    3872 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem (1123 bytes)
	I0511 17:20:34.444743    3872 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem (1679 bytes)
	I0511 17:20:34.444849    3872 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/845272.pem (1708 bytes)
	I0511 17:20:34.445349    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0511 17:20:34.466802    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0511 17:20:34.483372    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0511 17:20:34.499806    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/kubenet-20220511164515-84527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0511 17:20:34.516792    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0511 17:20:34.533043    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0511 17:20:34.550061    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0511 17:20:34.566553    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0511 17:20:34.583070    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0511 17:20:34.600047    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/84527.pem --> /usr/share/ca-certificates/84527.pem (1338 bytes)
	I0511 17:20:34.616676    3872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/845272.pem --> /usr/share/ca-certificates/845272.pem (1708 bytes)
	I0511 17:20:34.633078    3872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0511 17:20:34.646985    3872 ssh_runner.go:195] Run: openssl version
	I0511 17:20:34.653474    3872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84527.pem && ln -fs /usr/share/ca-certificates/84527.pem /etc/ssl/certs/84527.pem"
	I0511 17:20:34.662147    3872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84527.pem
	I0511 17:20:34.666591    3872 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 23:00 /usr/share/ca-certificates/84527.pem
	I0511 17:20:34.666638    3872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84527.pem
	I0511 17:20:34.672469    3872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84527.pem /etc/ssl/certs/51391683.0"
	I0511 17:20:34.680994    3872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845272.pem && ln -fs /usr/share/ca-certificates/845272.pem /etc/ssl/certs/845272.pem"
	I0511 17:20:34.689297    3872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845272.pem
	I0511 17:20:34.693816    3872 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:00 /usr/share/ca-certificates/845272.pem
	I0511 17:20:34.693868    3872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845272.pem
	I0511 17:20:34.700188    3872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/845272.pem /etc/ssl/certs/3ec20f2e.0"
	I0511 17:20:34.708994    3872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0511 17:20:34.717365    3872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0511 17:20:34.721534    3872 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I0511 17:20:34.721581    3872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0511 17:20:34.727078    3872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0511 17:20:34.735072    3872 kubeadm.go:391] StartCluster: {Name:kubenet-20220511164515-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kubenet-20220511164515-84527 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 17:20:34.735191    3872 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0511 17:20:34.765816    3872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0511 17:20:34.773679    3872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0511 17:20:34.781335    3872 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0511 17:20:34.781386    3872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0511 17:20:34.788876    3872 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0511 17:20:34.788901    3872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0511 17:20:35.296695    3872 out.go:204]   - Generating certificates and keys ...
	I0511 17:20:38.163055    3872 out.go:204]   - Booting up control plane ...
	I0511 17:20:53.193373    3872 out.go:204]   - Configuring RBAC rules ...
	I0511 17:20:53.576978    3872 cni.go:91] network plugin configured as "kubenet", returning disabled
	I0511 17:20:53.577007    3872 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0511 17:20:53.577103    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=kubenet-20220511164515-84527 minikube.k8s.io/updated_at=2022_05_11T17_20_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:20:53.577104    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:20:53.630944    3872 ops.go:34] apiserver oom_adj: -16
	I0511 17:20:53.631045    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:20:54.240417    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:20:54.742559    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:20:55.245500    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:20:55.748961    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:20:56.240442    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:20:56.742451    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:20:57.245462    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:20:57.741686    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:20:58.242309    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:20:58.745551    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:20:59.249032    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:20:59.740808    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:21:00.240723    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:21:00.749319    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:21:01.244452    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:21:01.740638    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:21:02.248241    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:21:02.741538    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:21:03.241092    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:21:03.748320    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:21:04.240884    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:21:04.746999    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:21:05.248516    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:21:05.742662    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:21:06.242402    3872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0511 17:21:06.297316    3872 kubeadm.go:1020] duration metric: took 12.71996019s to wait for elevateKubeSystemPrivileges.
	I0511 17:21:06.297330    3872 kubeadm.go:393] StartCluster complete in 31.561374018s
	I0511 17:21:06.297350    3872 settings.go:142] acquiring lock: {Name:mk1c460769e7c664507a8af69f45c0543d2c3117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:21:06.297449    3872 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 17:21:06.298195    3872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig: {Name:mkf471860c8603bccffa01a67a121482d1a42c8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 17:21:06.817304    3872 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubenet-20220511164515-84527" rescaled to 1
	I0511 17:21:06.817338    3872 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0511 17:21:06.817354    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0511 17:21:06.817383    3872 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0511 17:21:06.846693    3872 out.go:177] * Verifying Kubernetes components...
	I0511 17:21:06.817503    3872 config.go:178] Loaded profile config "kubenet-20220511164515-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 17:21:06.846756    3872 addons.go:65] Setting default-storageclass=true in profile "kubenet-20220511164515-84527"
	I0511 17:21:06.846758    3872 addons.go:65] Setting storage-provisioner=true in profile "kubenet-20220511164515-84527"
	I0511 17:21:06.871442    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0511 17:21:06.872832    3872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-20220511164515-84527"
	I0511 17:21:06.872835    3872 addons.go:153] Setting addon storage-provisioner=true in "kubenet-20220511164515-84527"
	W0511 17:21:06.872862    3872 addons.go:165] addon storage-provisioner should already be in state true
	I0511 17:21:06.872865    3872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0511 17:21:06.872916    3872 host.go:66] Checking if "kubenet-20220511164515-84527" exists ...
	I0511 17:21:06.873248    3872 cli_runner.go:164] Run: docker container inspect kubenet-20220511164515-84527 --format={{.State.Status}}
	I0511 17:21:06.873316    3872 cli_runner.go:164] Run: docker container inspect kubenet-20220511164515-84527 --format={{.State.Status}}
	I0511 17:21:06.901225    3872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-20220511164515-84527
	I0511 17:21:07.070765    3872 addons.go:153] Setting addon default-storageclass=true in "kubenet-20220511164515-84527"
	W0511 17:21:07.097688    3872 addons.go:165] addon default-storageclass should already be in state true
	I0511 17:21:07.097692    3872 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0511 17:21:07.097737    3872 host.go:66] Checking if "kubenet-20220511164515-84527" exists ...
	I0511 17:21:07.112370    3872 node_ready.go:35] waiting up to 5m0s for node "kubenet-20220511164515-84527" to be "Ready" ...
	I0511 17:21:07.140798    3872 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0511 17:21:07.140812    3872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0511 17:21:07.140932    3872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220511164515-84527
	I0511 17:21:07.143060    3872 cli_runner.go:164] Run: docker container inspect kubenet-20220511164515-84527 --format={{.State.Status}}
	I0511 17:21:07.148890    3872 node_ready.go:49] node "kubenet-20220511164515-84527" has status "Ready":"True"
	I0511 17:21:07.148921    3872 node_ready.go:38] duration metric: took 8.161633ms waiting for node "kubenet-20220511164515-84527" to be "Ready" ...
	I0511 17:21:07.148929    3872 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0511 17:21:07.177554    3872 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-2jswj" in "kube-system" namespace to be "Ready" ...
	I0511 17:21:07.293291    3872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52251 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kubenet-20220511164515-84527/id_rsa Username:docker}
	I0511 17:21:07.296508    3872 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0511 17:21:07.296522    3872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0511 17:21:07.296617    3872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220511164515-84527
	I0511 17:21:07.439991    3872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52251 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/kubenet-20220511164515-84527/id_rsa Username:docker}
	I0511 17:21:07.494662    3872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0511 17:21:07.719970    3872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0511 17:21:08.271098    3872 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.398248969s)
	I0511 17:21:08.271126    3872 start.go:815] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0511 17:21:08.328259    3872 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0511 17:21:08.384317    3872 addons.go:417] enableAddons completed in 1.566886498s
	I0511 17:21:09.204318    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:11.705917    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:14.199823    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:16.704630    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:19.199049    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:21.205299    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:23.701273    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:25.702108    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:28.203418    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:30.702388    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:33.201852    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:35.701251    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:37.701319    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:39.710339    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:42.202545    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:44.702135    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:46.705715    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:49.208893    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:51.701838    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:54.202286    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:56.701724    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:21:58.701897    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:01.202318    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:03.205527    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:05.206697    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:07.706021    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:10.201004    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:12.201827    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:14.707151    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:17.203477    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:19.702748    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:22.201830    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:24.203665    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:26.708879    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:29.201956    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:31.206499    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:33.702774    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:35.703954    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:38.206761    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:40.703117    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:42.703850    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:45.202487    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:47.205883    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:49.701074    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:51.702971    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:53.703234    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:55.711811    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:22:58.210709    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:00.705449    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:03.202521    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:05.207172    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:07.208983    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:09.707850    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:12.204680    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:14.204948    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:16.703680    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:18.704619    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:21.202367    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:23.702304    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:25.704278    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:28.203844    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:30.706582    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:33.202998    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:35.204411    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:37.705230    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:40.204810    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:42.205337    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:44.212925    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:46.711214    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:49.211534    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:51.708515    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:54.203135    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:56.203839    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:23:58.205052    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:00.205605    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:02.707401    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:05.204374    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:07.705636    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:10.204580    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:12.205725    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:14.707173    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:17.203649    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:19.206328    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:21.208959    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:23.709881    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:26.206452    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:28.707624    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:31.209120    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:33.705230    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:35.705657    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:37.705894    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:39.711081    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:42.204112    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:44.204290    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:46.205647    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:48.211975    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:50.705212    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:52.708640    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:55.211977    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:57.708064    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:24:59.711672    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:25:02.204941    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:25:04.207217    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:25:06.211453    3872 pod_ready.go:102] pod "coredns-64897985d-2jswj" in "kube-system" namespace has status "Ready":"False"
	I0511 17:25:07.209215    3872 pod_ready.go:81] duration metric: took 4m0.025658868s waiting for pod "coredns-64897985d-2jswj" in "kube-system" namespace to be "Ready" ...
	E0511 17:25:07.209226    3872 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0511 17:25:07.209237    3872 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-wx5jl" in "kube-system" namespace to be "Ready" ...
	I0511 17:25:07.211085    3872 pod_ready.go:97] error getting pod "coredns-64897985d-wx5jl" in "kube-system" namespace (skipping!): pods "coredns-64897985d-wx5jl" not found
	I0511 17:25:07.211094    3872 pod_ready.go:81] duration metric: took 1.852874ms waiting for pod "coredns-64897985d-wx5jl" in "kube-system" namespace to be "Ready" ...
	E0511 17:25:07.211099    3872 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-wx5jl" in "kube-system" namespace (skipping!): pods "coredns-64897985d-wx5jl" not found
	I0511 17:25:07.211103    3872 pod_ready.go:78] waiting up to 5m0s for pod "etcd-kubenet-20220511164515-84527" in "kube-system" namespace to be "Ready" ...
	I0511 17:25:07.215649    3872 pod_ready.go:92] pod "etcd-kubenet-20220511164515-84527" in "kube-system" namespace has status "Ready":"True"
	I0511 17:25:07.215656    3872 pod_ready.go:81] duration metric: took 4.548797ms waiting for pod "etcd-kubenet-20220511164515-84527" in "kube-system" namespace to be "Ready" ...
	I0511 17:25:07.215662    3872 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-kubenet-20220511164515-84527" in "kube-system" namespace to be "Ready" ...
	I0511 17:25:07.220336    3872 pod_ready.go:92] pod "kube-apiserver-kubenet-20220511164515-84527" in "kube-system" namespace has status "Ready":"True"
	I0511 17:25:07.220343    3872 pod_ready.go:81] duration metric: took 4.677926ms waiting for pod "kube-apiserver-kubenet-20220511164515-84527" in "kube-system" namespace to be "Ready" ...
	I0511 17:25:07.220349    3872 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-kubenet-20220511164515-84527" in "kube-system" namespace to be "Ready" ...
	I0511 17:25:07.403817    3872 pod_ready.go:92] pod "kube-controller-manager-kubenet-20220511164515-84527" in "kube-system" namespace has status "Ready":"True"
	I0511 17:25:07.403827    3872 pod_ready.go:81] duration metric: took 183.468154ms waiting for pod "kube-controller-manager-kubenet-20220511164515-84527" in "kube-system" namespace to be "Ready" ...
	I0511 17:25:07.403835    3872 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-r5m65" in "kube-system" namespace to be "Ready" ...
	I0511 17:25:07.805867    3872 pod_ready.go:92] pod "kube-proxy-r5m65" in "kube-system" namespace has status "Ready":"True"
	I0511 17:25:07.805879    3872 pod_ready.go:81] duration metric: took 402.029774ms waiting for pod "kube-proxy-r5m65" in "kube-system" namespace to be "Ready" ...
	I0511 17:25:07.805887    3872 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-kubenet-20220511164515-84527" in "kube-system" namespace to be "Ready" ...
	I0511 17:25:08.205897    3872 pod_ready.go:92] pod "kube-scheduler-kubenet-20220511164515-84527" in "kube-system" namespace has status "Ready":"True"
	I0511 17:25:08.205907    3872 pod_ready.go:81] duration metric: took 400.004537ms waiting for pod "kube-scheduler-kubenet-20220511164515-84527" in "kube-system" namespace to be "Ready" ...
	I0511 17:25:08.205912    3872 pod_ready.go:38] duration metric: took 4m1.050964823s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0511 17:25:08.205937    3872 api_server.go:51] waiting for apiserver process to appear ...
	I0511 17:25:08.206021    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0511 17:25:08.235997    3872 logs.go:274] 1 containers: [9ffe7a766905]
	I0511 17:25:08.257559    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0511 17:25:08.288130    3872 logs.go:274] 1 containers: [09ce0e635294]
	I0511 17:25:08.288226    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0511 17:25:08.318265    3872 logs.go:274] 1 containers: [c8a9b267f710]
	I0511 17:25:08.318362    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0511 17:25:08.349200    3872 logs.go:274] 1 containers: [0bb54c8ffb7b]
	I0511 17:25:08.349291    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0511 17:25:08.379259    3872 logs.go:274] 1 containers: [d4f916b293e3]
	I0511 17:25:08.379354    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0511 17:25:08.406696    3872 logs.go:274] 0 containers: []
	W0511 17:25:08.406708    3872 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0511 17:25:08.406788    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0511 17:25:08.436228    3872 logs.go:274] 1 containers: [1df653b7313e]
	I0511 17:25:08.436329    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0511 17:25:08.465895    3872 logs.go:274] 1 containers: [efea7b468644]
	I0511 17:25:08.465922    3872 logs.go:123] Gathering logs for Docker ...
	I0511 17:25:08.465933    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0511 17:25:08.480911    3872 logs.go:123] Gathering logs for describe nodes ...
	I0511 17:25:08.480923    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0511 17:25:08.550237    3872 logs.go:123] Gathering logs for kube-apiserver [9ffe7a766905] ...
	I0511 17:25:08.550251    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ffe7a766905"
	I0511 17:25:08.587194    3872 logs.go:123] Gathering logs for coredns [c8a9b267f710] ...
	I0511 17:25:08.587208    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a9b267f710"
	I0511 17:25:08.618729    3872 logs.go:123] Gathering logs for kube-scheduler [0bb54c8ffb7b] ...
	I0511 17:25:08.618744    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bb54c8ffb7b"
	I0511 17:25:08.656875    3872 logs.go:123] Gathering logs for kube-proxy [d4f916b293e3] ...
	I0511 17:25:08.656889    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f916b293e3"
	I0511 17:25:08.689450    3872 logs.go:123] Gathering logs for container status ...
	I0511 17:25:08.689464    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0511 17:25:08.724830    3872 logs.go:123] Gathering logs for kubelet ...
	I0511 17:25:08.724845    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0511 17:25:08.797175    3872 logs.go:123] Gathering logs for dmesg ...
	I0511 17:25:08.797192    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0511 17:25:08.818255    3872 logs.go:123] Gathering logs for etcd [09ce0e635294] ...
	I0511 17:25:08.818268    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09ce0e635294"
	I0511 17:25:08.855603    3872 logs.go:123] Gathering logs for storage-provisioner [1df653b7313e] ...
	I0511 17:25:08.855620    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df653b7313e"
	I0511 17:25:08.889183    3872 logs.go:123] Gathering logs for kube-controller-manager [efea7b468644] ...
	I0511 17:25:08.889200    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efea7b468644"
	I0511 17:25:11.433472    3872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0511 17:25:11.445579    3872 api_server.go:71] duration metric: took 4m4.622126358s to wait for apiserver process to appear ...
	I0511 17:25:11.445597    3872 api_server.go:87] waiting for apiserver healthz status ...
	I0511 17:25:11.445679    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0511 17:25:11.475592    3872 logs.go:274] 1 containers: [9ffe7a766905]
	I0511 17:25:11.475686    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0511 17:25:11.506166    3872 logs.go:274] 1 containers: [09ce0e635294]
	I0511 17:25:11.506261    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0511 17:25:11.542251    3872 logs.go:274] 1 containers: [c8a9b267f710]
	I0511 17:25:11.542338    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0511 17:25:11.574025    3872 logs.go:274] 1 containers: [0bb54c8ffb7b]
	I0511 17:25:11.574106    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0511 17:25:11.606486    3872 logs.go:274] 1 containers: [d4f916b293e3]
	I0511 17:25:11.606593    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0511 17:25:11.638835    3872 logs.go:274] 0 containers: []
	W0511 17:25:11.638849    3872 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0511 17:25:11.638968    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0511 17:25:11.670668    3872 logs.go:274] 1 containers: [1df653b7313e]
	I0511 17:25:11.670775    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0511 17:25:11.703670    3872 logs.go:274] 1 containers: [efea7b468644]
	I0511 17:25:11.703716    3872 logs.go:123] Gathering logs for kubelet ...
	I0511 17:25:11.703729    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0511 17:25:11.784216    3872 logs.go:123] Gathering logs for kube-apiserver [9ffe7a766905] ...
	I0511 17:25:11.784234    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ffe7a766905"
	I0511 17:25:11.824651    3872 logs.go:123] Gathering logs for coredns [c8a9b267f710] ...
	I0511 17:25:11.824666    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a9b267f710"
	I0511 17:25:11.860611    3872 logs.go:123] Gathering logs for kube-proxy [d4f916b293e3] ...
	I0511 17:25:11.860629    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f916b293e3"
	I0511 17:25:11.903717    3872 logs.go:123] Gathering logs for storage-provisioner [1df653b7313e] ...
	I0511 17:25:11.903736    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df653b7313e"
	I0511 17:25:11.949418    3872 logs.go:123] Gathering logs for kube-controller-manager [efea7b468644] ...
	I0511 17:25:11.949433    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efea7b468644"
	I0511 17:25:11.997749    3872 logs.go:123] Gathering logs for Docker ...
	I0511 17:25:11.997768    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0511 17:25:12.022764    3872 logs.go:123] Gathering logs for container status ...
	I0511 17:25:12.022786    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0511 17:25:12.066477    3872 logs.go:123] Gathering logs for dmesg ...
	I0511 17:25:12.066492    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0511 17:25:12.090756    3872 logs.go:123] Gathering logs for describe nodes ...
	I0511 17:25:12.090774    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0511 17:25:12.192036    3872 logs.go:123] Gathering logs for etcd [09ce0e635294] ...
	I0511 17:25:12.192061    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09ce0e635294"
	I0511 17:25:12.235348    3872 logs.go:123] Gathering logs for kube-scheduler [0bb54c8ffb7b] ...
	I0511 17:25:12.235363    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bb54c8ffb7b"
	I0511 17:25:14.784310    3872 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52257/healthz ...
	I0511 17:25:14.791786    3872 api_server.go:266] https://127.0.0.1:52257/healthz returned 200:
	ok
	I0511 17:25:14.793124    3872 api_server.go:140] control plane version: v1.23.5
	I0511 17:25:14.793135    3872 api_server.go:130] duration metric: took 3.347451336s to wait for apiserver health ...
	I0511 17:25:14.793142    3872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0511 17:25:14.793222    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0511 17:25:14.823573    3872 logs.go:274] 1 containers: [9ffe7a766905]
	I0511 17:25:14.823672    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0511 17:25:14.853501    3872 logs.go:274] 1 containers: [09ce0e635294]
	I0511 17:25:14.853593    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0511 17:25:14.882977    3872 logs.go:274] 1 containers: [c8a9b267f710]
	I0511 17:25:14.883073    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0511 17:25:14.912035    3872 logs.go:274] 1 containers: [0bb54c8ffb7b]
	I0511 17:25:14.912126    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0511 17:25:14.941044    3872 logs.go:274] 1 containers: [d4f916b293e3]
	I0511 17:25:14.941136    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0511 17:25:14.971678    3872 logs.go:274] 0 containers: []
	W0511 17:25:14.971690    3872 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0511 17:25:14.971774    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0511 17:25:15.002897    3872 logs.go:274] 1 containers: [1df653b7313e]
	I0511 17:25:15.002994    3872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0511 17:25:15.032183    3872 logs.go:274] 1 containers: [efea7b468644]
	I0511 17:25:15.032207    3872 logs.go:123] Gathering logs for coredns [c8a9b267f710] ...
	I0511 17:25:15.032216    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a9b267f710"
	I0511 17:25:15.064195    3872 logs.go:123] Gathering logs for container status ...
	I0511 17:25:15.064208    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0511 17:25:15.092557    3872 logs.go:123] Gathering logs for dmesg ...
	I0511 17:25:15.092571    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0511 17:25:15.113201    3872 logs.go:123] Gathering logs for describe nodes ...
	I0511 17:25:15.113214    3872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0511 17:25:15.189205    3872 logs.go:123] Gathering logs for kube-apiserver [9ffe7a766905] ...
	I0511 17:25:15.189219    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ffe7a766905"
	I0511 17:25:15.228175    3872 logs.go:123] Gathering logs for etcd [09ce0e635294] ...
	I0511 17:25:15.228189    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09ce0e635294"
	I0511 17:25:15.263725    3872 logs.go:123] Gathering logs for kube-scheduler [0bb54c8ffb7b] ...
	I0511 17:25:15.263739    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bb54c8ffb7b"
	I0511 17:25:15.299496    3872 logs.go:123] Gathering logs for kube-proxy [d4f916b293e3] ...
	I0511 17:25:15.299512    3872 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f916b293e3"

                                                
                                                
** /stderr **
net_test.go:103: failed start: signal: killed
--- FAIL: TestNetworkPlugins/group/kubenet/Start (342.16s)
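Note on the gathering loop in the output above: while waiting for the kube-system pods, minikube repeatedly locates each control-plane container with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` and then tails it with `docker logs --tail 400 <id>` over SSH into the node. For re-running that step by hand against a local Docker daemon, a minimal Go sketch follows; the component names and the --tail length are taken from the log lines above, while the rest (including talking to the local daemon directly instead of through ssh_runner) is illustrative and is not minikube's own logs.go.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// components mirrors the k8s_* name filters used in the gathering loop above.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "storage-provisioner", "kube-controller-manager",
	}

	func main() {
		for _, c := range components {
			// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Printf("listing %s containers: %v\n", c, err)
				continue
			}
			for _, id := range strings.Fields(string(out)) {
				// docker logs --tail 400 <id>, as in the log lines above.
				logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				if err != nil {
					fmt.Printf("logs for %s (%s): %v\n", c, id, err)
					continue
				}
				fmt.Printf("==> %s [%s] <==\n%s\n", c, id, logs)
			}
		}
	}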
E0511 17:45:01.064625   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:45:19.275786   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:45:42.512751   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:46:20.362829   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:46:24.125282   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:47:01.262769   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 17:47:16.197246   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:47:35.380003   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 17:48:10.413762   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:48:17.856025   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 17:48:34.481920   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:48:34.751907   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 17:48:41.607949   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:48:47.577129   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 17:49:00.140598   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:50:01.064748   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:50:42.520881   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:51:20.365728   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:51:37.580715   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:52:01.267027   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 17:52:16.199532   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:52:18.485941   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
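The E-level cert_rotation entries above all fail in the same way: a background certificate-reload loop (cert_rotation.go, apparently from client-go's transport layer going by the log prefix) tries to reopen a profile's client.crt under .minikube/profiles/<profile>/ and the file is no longer on disk, presumably because those test profiles have since been deleted. A small Go helper for checking whether a profile still has its client certificate, using the path layout visible in those errors; the helper itself is illustrative:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// hasClientCert reports whether a minikube profile still has its client
	// certificate on disk, using the layout seen in the errors above:
	// <minikube home>/profiles/<profile>/client.crt
	func hasClientCert(minikubeHome, profile string) (bool, error) {
		p := filepath.Join(minikubeHome, "profiles", profile, "client.crt")
		if _, err := os.Stat(p); err != nil {
			if os.IsNotExist(err) {
				return false, nil
			}
			return false, err
		}
		return true, nil
	}

	func main() {
		// Placeholder values; the profile name is one of those in the errors above.
		home := os.ExpandEnv("$HOME/.minikube")
		ok, err := hasClientCert(home, "skaffold-20220511164202-84527")
		fmt.Println(ok, err)
	}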

                                                
                                    

Test pass (254/280)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 16.99
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.45
10 TestDownloadOnly/v1.23.5/json-events 7.03
11 TestDownloadOnly/v1.23.5/preload-exists 0
14 TestDownloadOnly/v1.23.5/kubectl 0
15 TestDownloadOnly/v1.23.5/LogsDuration 0.36
17 TestDownloadOnly/v1.23.6-rc.0/json-events 7.3
18 TestDownloadOnly/v1.23.6-rc.0/preload-exists 0
21 TestDownloadOnly/v1.23.6-rc.0/kubectl 0
22 TestDownloadOnly/v1.23.6-rc.0/LogsDuration 0.34
23 TestDownloadOnly/DeleteAll 1.21
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.68
25 TestDownloadOnlyKic 7.57
26 TestBinaryMirror 2.21
27 TestOffline 132.18
29 TestAddons/Setup 135.62
33 TestAddons/parallel/MetricsServer 5.94
34 TestAddons/parallel/HelmTiller 11.39
36 TestAddons/parallel/CSI 38.53
38 TestAddons/serial/GCPAuth 15.08
39 TestAddons/StoppedEnableDisable 18.13
40 TestCertOptions 73.21
41 TestCertExpiration 267.88
42 TestDockerFlags 86.18
43 TestForceSystemdFlag 340.69
44 TestForceSystemdEnv 81
46 TestHyperKitDriverInstallOrUpdate 7.79
49 TestErrorSpam/setup 72.94
50 TestErrorSpam/start 2.96
51 TestErrorSpam/status 1.96
52 TestErrorSpam/pause 2.37
53 TestErrorSpam/unpause 2.66
54 TestErrorSpam/stop 18.23
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 124.17
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 7.47
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 1.84
65 TestFunctional/serial/CacheCmd/cache/add_remote 4.77
66 TestFunctional/serial/CacheCmd/cache/add_local 2.07
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.09
68 TestFunctional/serial/CacheCmd/cache/list 0.08
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.73
70 TestFunctional/serial/CacheCmd/cache/cache_reload 3.18
71 TestFunctional/serial/CacheCmd/cache/delete 0.16
72 TestFunctional/serial/MinikubeKubectlCmd 0.48
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.6
74 TestFunctional/serial/ExtraConfig 32.59
75 TestFunctional/serial/ComponentHealth 0.06
76 TestFunctional/serial/LogsCmd 3.79
77 TestFunctional/serial/LogsFileCmd 3.98
79 TestFunctional/parallel/ConfigCmd 0.5
80 TestFunctional/parallel/DashboardCmd 11.91
81 TestFunctional/parallel/DryRun 1.76
82 TestFunctional/parallel/InternationalLanguage 0.81
83 TestFunctional/parallel/StatusCmd 2.58
86 TestFunctional/parallel/ServiceCmd 16.5
88 TestFunctional/parallel/AddonsCmd 0.34
89 TestFunctional/parallel/PersistentVolumeClaim 27.4
91 TestFunctional/parallel/SSHCmd 1.4
92 TestFunctional/parallel/CpCmd 2.61
93 TestFunctional/parallel/MySQL 21.79
94 TestFunctional/parallel/FileSync 0.67
95 TestFunctional/parallel/CertSync 4.23
99 TestFunctional/parallel/NodeLabels 0.05
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
103 TestFunctional/parallel/Version/short 0.13
104 TestFunctional/parallel/Version/components 1.54
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.51
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.51
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.5
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.46
109 TestFunctional/parallel/ImageCommands/ImageBuild 4.08
110 TestFunctional/parallel/ImageCommands/Setup 2.1
111 TestFunctional/parallel/DockerEnv/bash 2.8
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.79
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.42
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.98
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.41
116 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.9
117 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.9
118 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.17
119 TestFunctional/parallel/ImageCommands/ImageRemove 1.07
120 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.86
121 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.87
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.96
123 TestFunctional/parallel/ProfileCmd/profile_list 0.76
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.85
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.2
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 3.95
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
135 TestFunctional/parallel/MountCmd/any-port 8.71
136 TestFunctional/parallel/MountCmd/specific-port 3.41
137 TestFunctional/delete_addon-resizer_images 0.29
138 TestFunctional/delete_my-image_image 0.13
139 TestFunctional/delete_minikube_cached_images 0.12
142 TestIngressAddonLegacy/StartLegacyK8sCluster 132.94
144 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.51
145 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.69
149 TestJSONOutput/start/Command 123.43
150 TestJSONOutput/start/Audit 0
152 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/pause/Command 0.78
156 TestJSONOutput/pause/Audit 0
158 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/unpause/Command 0.77
162 TestJSONOutput/unpause/Audit 0
164 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/stop/Command 17.35
168 TestJSONOutput/stop/Audit 0
170 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
172 TestErrorJSONOutput 1.08
174 TestKicCustomNetwork/create_custom_network 87.17
175 TestKicCustomNetwork/use_default_bridge_network 75.16
176 TestKicExistingNetwork 86.76
177 TestKicCustomSubnet 86.54
178 TestMainNoArgs 0.08
181 TestMountStart/serial/StartWithMountFirst 48.48
182 TestMountStart/serial/VerifyMountFirst 0.62
183 TestMountStart/serial/StartWithMountSecond 47.98
184 TestMountStart/serial/VerifyMountSecond 0.68
185 TestMountStart/serial/DeleteFirst 11.78
186 TestMountStart/serial/VerifyMountPostDelete 0.61
187 TestMountStart/serial/Stop 7.49
188 TestMountStart/serial/RestartStopped 30.31
189 TestMountStart/serial/VerifyMountPostStop 0.61
192 TestMultiNode/serial/FreshStart2Nodes 231.87
193 TestMultiNode/serial/DeployApp2Nodes 6.13
194 TestMultiNode/serial/PingHostFrom2Pods 0.87
195 TestMultiNode/serial/AddNode 111.6
196 TestMultiNode/serial/ProfileList 0.71
197 TestMultiNode/serial/CopyFile 23.3
198 TestMultiNode/serial/StopNode 11.54
199 TestMultiNode/serial/StartAfterStop 50.74
200 TestMultiNode/serial/RestartKeepsNodes 250.18
201 TestMultiNode/serial/DeleteNode 17.93
202 TestMultiNode/serial/StopMultiNode 36.16
203 TestMultiNode/serial/RestartMultiNode 147.53
204 TestMultiNode/serial/ValidateNameConflict 100.94
208 TestPreload 205.45
210 TestScheduledStopUnix 153.68
211 TestSkaffold 128.58
213 TestInsufficientStorage 64.11
214 TestRunningBinaryUpgrade 135.75
216 TestKubernetesUpgrade 176.62
217 TestMissingContainerUpgrade 199.09
229 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 8.92
230 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 12.13
231 TestStoppedBinaryUpgrade/Setup 1.36
232 TestStoppedBinaryUpgrade/Upgrade 141.3
233 TestStoppedBinaryUpgrade/MinikubeLogs 4.32
242 TestPause/serial/Start 107.25
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.52
245 TestNoKubernetes/serial/StartWithK8s 53.84
246 TestNoKubernetes/serial/StartWithStopK8s 22.85
247 TestNoKubernetes/serial/Start 42.78
248 TestPause/serial/SecondStartNoReconfiguration 7.61
249 TestPause/serial/Pause 0.87
250 TestPause/serial/VerifyStatus 0.64
251 TestPause/serial/Unpause 0.84
252 TestPause/serial/PauseAgain 0.93
253 TestPause/serial/DeletePaused 5.4
254 TestPause/serial/VerifyDeletedResources 1.23
255 TestNoKubernetes/serial/VerifyK8sNotRunning 0.64
256 TestNoKubernetes/serial/ProfileList 1.64
257 TestNetworkPlugins/group/auto/Start 109.65
258 TestNoKubernetes/serial/Stop 8.73
259 TestNoKubernetes/serial/StartNoArgs 20.3
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.6
261 TestNetworkPlugins/group/false/Start 106.1
262 TestNetworkPlugins/group/auto/KubeletFlags 0.67
263 TestNetworkPlugins/group/auto/NetCatPod 13
264 TestNetworkPlugins/group/auto/DNS 0.14
265 TestNetworkPlugins/group/auto/Localhost 0.12
266 TestNetworkPlugins/group/auto/HairPin 0.13
267 TestNetworkPlugins/group/cilium/Start 124.57
268 TestNetworkPlugins/group/false/KubeletFlags 0.8
269 TestNetworkPlugins/group/false/NetCatPod 13.89
270 TestNetworkPlugins/group/false/DNS 0.14
271 TestNetworkPlugins/group/false/Localhost 0.12
272 TestNetworkPlugins/group/false/HairPin 5.13
274 TestNetworkPlugins/group/cilium/ControllerPod 5.02
275 TestNetworkPlugins/group/cilium/KubeletFlags 0.65
276 TestNetworkPlugins/group/cilium/NetCatPod 12.66
277 TestNetworkPlugins/group/cilium/DNS 0.14
278 TestNetworkPlugins/group/cilium/Localhost 0.13
279 TestNetworkPlugins/group/cilium/HairPin 0.12
280 TestNetworkPlugins/group/custom-weave/Start 65.33
281 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.66
282 TestNetworkPlugins/group/custom-weave/NetCatPod 12.98
283 TestNetworkPlugins/group/enable-default-cni/Start 57.27
284 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.75
285 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.03
288 TestNetworkPlugins/group/bridge/Start 85.62
289 TestNetworkPlugins/group/bridge/KubeletFlags 0.65
290 TestNetworkPlugins/group/bridge/NetCatPod 15.93
294 TestStartStop/group/old-k8s-version/serial/FirstStart 147.88
295 TestStartStop/group/old-k8s-version/serial/DeployApp 10.14
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.81
297 TestStartStop/group/old-k8s-version/serial/Stop 18.74
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.47
299 TestStartStop/group/old-k8s-version/serial/SecondStart 48.75
301 TestStartStop/group/embed-certs/serial/FirstStart 322.15
302 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 24.02
303 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 7.27
304 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.67
305 TestStartStop/group/old-k8s-version/serial/Pause 4.54
307 TestStartStop/group/no-preload/serial/FirstStart 114.64
308 TestStartStop/group/no-preload/serial/DeployApp 10.13
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
310 TestStartStop/group/no-preload/serial/Stop 19.8
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.48
312 TestStartStop/group/no-preload/serial/SecondStart 387.01
313 TestStartStop/group/embed-certs/serial/DeployApp 10.05
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.89
315 TestStartStop/group/embed-certs/serial/Stop 19.49
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.47
317 TestStartStop/group/embed-certs/serial/SecondStart 611.32
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.96
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.68
321 TestStartStop/group/no-preload/serial/Pause 4.68
323 TestStartStop/group/default-k8s-different-port/serial/FirstStart 331.78
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.9
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.71
327 TestStartStop/group/embed-certs/serial/Pause 5.34
328 TestStartStop/group/default-k8s-different-port/serial/DeployApp 11.04
330 TestStartStop/group/newest-cni/serial/FirstStart 60.92
331 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.83
332 TestStartStop/group/default-k8s-different-port/serial/Stop 17.82
333 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.52
334 TestStartStop/group/default-k8s-different-port/serial/SecondStart 580.12
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
337 TestStartStop/group/newest-cni/serial/Stop 19.79
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.48
339 TestStartStop/group/newest-cni/serial/SecondStart 58.34
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.67
343 TestStartStop/group/newest-cni/serial/Pause 4.48
344 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.01
345 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 6.94
346 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.66
347 TestStartStop/group/default-k8s-different-port/serial/Pause 4.45
x
+
TestDownloadOnly/v1.16.0/json-events (16.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220511155349-84527 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220511155349-84527 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (16.985770529s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.99s)
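The json-events subtest drives `minikube start -o=json --download-only ...` and then asserts on the JSON events the command prints, one JSON object per line of stdout. A sketch of consuming that stream from Go without assuming a particular event schema (each line is decoded into a generic map); the binary path and most flags mirror the Run line above, while the profile name is a placeholder and --force/--alsologtostderr are omitted:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "start", "-o=json",
			"--download-only", "-p", "download-only-example", // placeholder profile name
			"--kubernetes-version=v1.16.0", "--container-runtime=docker", "--driver=docker")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		sc := bufio.NewScanner(stdout)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON line rather than failing
			}
			fmt.Printf("event: %v\n", ev)
		}
		if err := cmd.Wait(); err != nil {
			fmt.Println("minikube exited with:", err)
		}
	}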

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220511155349-84527
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220511155349-84527: exit status 85 (449.810845ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/11 15:53:49
	Running on machine: 37310
	Binary: Built with gc go1.18.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0511 15:53:49.913639   84549 out.go:296] Setting OutFile to fd 1 ...
	I0511 15:53:49.913845   84549 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 15:53:49.913851   84549 out.go:309] Setting ErrFile to fd 2...
	I0511 15:53:49.913855   84549 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 15:53:49.913961   84549 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	W0511 15:53:49.914067   84549 root.go:300] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/config/config.json: no such file or directory
	I0511 15:53:49.914557   84549 out.go:303] Setting JSON to true
	I0511 15:53:49.930683   84549 start.go:115] hostinfo: {"hostname":"37310.local","uptime":24804,"bootTime":1652284825,"procs":370,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0511 15:53:49.930783   84549 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0511 15:53:49.958737   84549 out.go:97] [download-only-20220511155349-84527] minikube v1.25.2 on Darwin 11.2.3
	I0511 15:53:49.958862   84549 notify.go:193] Checking for updates...
	W0511 15:53:49.958866   84549 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball: no such file or directory
	I0511 15:53:49.984688   84549 out.go:169] MINIKUBE_LOCATION=13639
	I0511 15:53:50.010809   84549 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 15:53:50.036711   84549 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0511 15:53:50.062834   84549 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 15:53:50.088783   84549 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	W0511 15:53:50.140662   84549 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0511 15:53:50.140894   84549 driver.go:358] Setting default libvirt URI to qemu:///system
	W0511 15:53:50.224925   84549 docker.go:113] docker version returned error: exit status 1
	I0511 15:53:50.251346   84549 out.go:97] Using the docker driver based on user configuration
	I0511 15:53:50.251366   84549 start.go:284] selected driver: docker
	I0511 15:53:50.251373   84549 start.go:801] validating driver "docker" against <nil>
	I0511 15:53:50.251510   84549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 15:53:50.441028   84549 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/
local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0511 15:53:50.467765   84549 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0511 15:53:50.514525   84549 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0511 15:53:50.566629   84549 out.go:169] 
	W0511 15:53:50.592588   84549 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0511 15:53:50.618366   84549 out.go:169] 
	I0511 15:53:50.670574   84549 out.go:169] 
	W0511 15:53:50.696333   84549 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0511 15:53:50.696440   84549 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0511 15:53:50.696480   84549 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0511 15:53:50.722523   84549 out.go:169] 
	I0511 15:53:50.748792   84549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 15:53:50.918237   84549 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/
local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	W0511 15:53:50.944875   84549 out.go:272] docker is currently using the  storage driver, consider switching to overlay2 for better performance
	I0511 15:53:50.944946   84549 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0511 15:53:50.996733   84549 out.go:169] 
	W0511 15:53:51.022714   84549 out_reason.go:110] Docker Desktop only has 0MiB available, less than the required 1800MiB for Kubernetes
	W0511 15:53:51.022819   84549 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0511 15:53:51.022888   84549 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0511 15:53:51.048731   84549 out.go:169] 
	I0511 15:53:51.100591   84549 out.go:169] 
	W0511 15:53:51.126834   84549 out_reason.go:110] docker only has 0MiB available, less than the required 1800MiB for Kubernetes
	I0511 15:53:51.152720   84549 out.go:169] 
	I0511 15:53:51.178581   84549 start_flags.go:373] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0511 15:53:51.178699   84549 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0511 15:53:51.204749   84549 out.go:169] Using Docker Desktop driver with the root privilege
	I0511 15:53:51.230747   84549 cni.go:95] Creating CNI manager for ""
	I0511 15:53:51.230778   84549 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0511 15:53:51.230795   84549 start_flags.go:306] config:
	{Name:download-only-20220511155349-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220511155349-84527 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDoma
in:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 15:53:51.256583   84549 out.go:97] Starting control plane node download-only-20220511155349-84527 in cluster download-only-20220511155349-84527
	I0511 15:53:51.256625   84549 cache.go:120] Beginning downloading kic base image for docker with docker
	I0511 15:53:51.282731   84549 out.go:97] Pulling base image ...
	I0511 15:53:51.282765   84549 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0511 15:53:51.282823   84549 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0511 15:53:51.282927   84549 cache.go:107] acquiring lock: {Name:mke70e8e54844c0a13e742788d66cb18e9d04c04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0511 15:53:51.282934   84549 cache.go:107] acquiring lock: {Name:mk088576273d3ba307854467943c785825168548 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0511 15:53:51.283001   84549 cache.go:107] acquiring lock: {Name:mkd156a2de90ec1ad7f837dd513b5749c8697dfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0511 15:53:51.283015   84549 cache.go:107] acquiring lock: {Name:mk7d52c6c691f1eb094d8439144b86c2e0a29f01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0511 15:53:51.283053   84549 cache.go:107] acquiring lock: {Name:mk349963ec11ac64a7be0edf993ed8eaba9c8801 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0511 15:53:51.283049   84549 cache.go:107] acquiring lock: {Name:mk52c6fe31da59477c857af1b919db76f6dc30cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0511 15:53:51.283102   84549 cache.go:107] acquiring lock: {Name:mk90a60c5e3b224eece528aa24f188f50011ede6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0511 15:53:51.284241   84549 cache.go:107] acquiring lock: {Name:mk3f3e9b007f58892c7663a1b51f8042f2ccf1e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0511 15:53:51.285066   84549 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0511 15:53:51.285081   84549 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0511 15:53:51.285115   84549 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0511 15:53:51.285116   84549 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0511 15:53:51.285120   84549 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0511 15:53:51.285151   84549 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0511 15:53:51.285172   84549 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0511 15:53:51.285179   84549 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/download-only-20220511155349-84527/config.json ...
	I0511 15:53:51.285183   84549 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0511 15:53:51.285233   84549 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/download-only-20220511155349-84527/config.json: {Name:mk0af71ad1d6e5da50742f755823717008bcaa11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0511 15:53:51.285601   84549 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0511 15:53:51.285976   84549 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/linux/amd64/v1.16.0/kubelet
	I0511 15:53:51.285976   84549 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I0511 15:53:51.285980   84549 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/linux/amd64/v1.16.0/kubeadm
	I0511 15:53:51.287248   84549 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0511 15:53:51.287393   84549 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0511 15:53:51.287410   84549 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0511 15:53:51.288567   84549 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0511 15:53:51.288630   84549 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0511 15:53:51.288748   84549 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0511 15:53:51.288901   84549 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0511 15:53:51.289048   84549 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0511 15:53:51.397541   84549 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a to local cache
	I0511 15:53:51.397713   84549 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local cache directory
	I0511 15:53:51.397832   84549 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a to local cache
	I0511 15:53:51.837997   84549 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0
	I0511 15:53:51.840417   84549 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0511 15:53:51.876837   84549 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2
	I0511 15:53:51.891529   84549 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0
	I0511 15:53:51.958995   84549 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0
	I0511 15:53:51.959377   84549 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0511 15:53:51.959398   84549 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 676.415687ms
	I0511 15:53:51.959411   84549 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0511 15:53:51.964263   84549 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0
	I0511 15:53:51.991323   84549 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0511 15:53:52.004844   84549 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0511 15:53:54.403784   84549 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0511 15:53:54.482664   84549 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0511 15:53:54.482682   84549 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.199668725s
	I0511 15:53:54.482699   84549 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0511 15:53:54.664012   84549 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 exists
	I0511 15:53:54.664035   84549 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2" took 3.380867904s
	I0511 15:53:54.664048   84549 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 succeeded
	I0511 15:53:55.906395   84549 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0511 15:53:55.906415   84549 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0" took 4.623229159s
	I0511 15:53:55.906424   84549 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0511 15:53:55.925861   84549 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0511 15:53:55.925879   84549 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0" took 4.642736308s
	I0511 15:53:55.925888   84549 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0511 15:53:56.689110   84549 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0511 15:53:56.689133   84549 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0" took 5.406017145s
	I0511 15:53:56.689143   84549 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0511 15:53:56.690879   84549 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0511 15:53:56.690893   84549 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0" took 5.407820256s
	I0511 15:53:56.690902   84549 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0511 15:53:56.927954   84549 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 exists
	I0511 15:53:56.927972   84549 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0" took 5.644814121s
	I0511 15:53:56.927990   84549 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0511 15:53:56.928005   84549 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220511155349-84527"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.45s)
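The Last Start log above shows download.go fetching kubelet, kubectl and kubeadm with a `?checksum=file:<url>.sha1` query string, i.e. the expected SHA-1 digest is published alongside each binary. A self-contained sketch of that download-and-verify pattern; the URL is one of those listed above, the .sha1 sidecar is assumed to contain just the hex digest, and this is an illustration rather than minikube's actual download code:

	package main

	import (
		"crypto/sha1"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// downloadWithSHA1 downloads url to dest and verifies the result against the
	// hex digest published at url+".sha1" (assumed to hold only the digest).
	func downloadWithSHA1(url, dest string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("GET %s: %s", url, resp.Status)
		}

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		// Hash while writing so the download is only read once.
		h := sha1.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}

		sumResp, err := http.Get(url + ".sha1")
		if err != nil {
			return err
		}
		defer sumResp.Body.Close()
		want, err := io.ReadAll(sumResp.Body)
		if err != nil {
			return err
		}

		got := hex.EncodeToString(h.Sum(nil))
		if got != strings.TrimSpace(string(want)) {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, strings.TrimSpace(string(want)))
		}
		return nil
	}

	func main() {
		url := "https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl"
		if err := downloadWithSHA1(url, "kubectl-v1.16.0"); err != nil {
			fmt.Println("download failed:", err)
			return
		}
		fmt.Println("downloaded and verified")
	}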

                                                
                                    
x
+
TestDownloadOnly/v1.23.5/json-events (7.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220511155349-84527 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220511155349-84527 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker : (7.030741107s)
--- PASS: TestDownloadOnly/v1.23.5/json-events (7.03s)

                                                
                                    
x
+
TestDownloadOnly/v1.23.5/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/preload-exists
--- PASS: TestDownloadOnly/v1.23.5/preload-exists (0.00s)
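The preload-exists subtests only verify that the preloaded image tarball for the requested Kubernetes version is already in the local cache; the v1.16.0 variant of this check is the one that failed in this run, and the "Failed to list preload files" warning earlier in the log points at the cache directory involved (.minikube/cache/preloaded-tarball). A minimal sketch of such a check; the directory comes from that warning, while the filename pattern (preload version prefix, container runtime, storage driver, architecture) is an assumption made here for illustration:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadExists reports whether a preloaded-images tarball is already cached.
	// The cache directory matches the warning earlier in this report; the filename
	// pattern below is illustrative, not guaranteed to match minikube's exactly.
	func preloadExists(minikubeHome, preloadVersion, k8sVersion, runtime, arch string) (string, bool) {
		name := fmt.Sprintf("preloaded-images-k8s-%s-%s-%s-overlay2-%s.tar.lz4",
			preloadVersion, k8sVersion, runtime, arch)
		path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
		_, err := os.Stat(path)
		return path, err == nil
	}

	func main() {
		// Example values only; "v18" is an assumed preload schema prefix.
		home := os.ExpandEnv("$HOME/.minikube")
		path, ok := preloadExists(home, "v18", "v1.23.5", "docker", "amd64")
		fmt.Println(path, ok)
	}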

                                                
                                    
x
+
TestDownloadOnly/v1.23.5/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/kubectl
--- PASS: TestDownloadOnly/v1.23.5/kubectl (0.00s)
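Throughout the start logs in this report (including the LogsDuration output just below), minikube shells out to `docker system info --format "{{json .}}"` and decodes the result before validating the driver; the NCPU and MemTotal fields visible in those decoded dumps are what drive the "at least 2 CPUs" and "1800MiB" warnings recorded earlier. A sketch of the same decode, keeping only those two fields; the struct and the check are illustrative, with the thresholds taken from the warnings in this report:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo keeps just the fields needed for the CPU/memory validation;
	// NCPU and MemTotal are the names that appear in the decoded info above.
	type dockerInfo struct {
		NCPU     int   `json:"NCPU"`
		MemTotal int64 `json:"MemTotal"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			fmt.Println("docker system info failed:", err)
			return
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			fmt.Println("decoding docker info:", err)
			return
		}
		fmt.Printf("CPUs=%d MemTotal=%dMiB\n", info.NCPU, info.MemTotal/(1024*1024))
		// Thresholds as reported in the warnings above: 2 CPUs and 1800MiB.
		if info.NCPU < 2 || info.MemTotal < 1800*1024*1024 {
			fmt.Println("docker daemon has insufficient CPU or memory for Kubernetes")
		}
	}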

                                                
                                    
x
+
TestDownloadOnly/v1.23.5/LogsDuration (0.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220511155349-84527
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220511155349-84527: exit status 85 (358.708537ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/11 15:54:18
	Running on machine: 37310
	Binary: Built with gc go1.18.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0511 15:54:18.186622   84617 out.go:296] Setting OutFile to fd 1 ...
	I0511 15:54:18.186779   84617 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 15:54:18.186784   84617 out.go:309] Setting ErrFile to fd 2...
	I0511 15:54:18.186788   84617 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 15:54:18.186876   84617 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	W0511 15:54:18.186969   84617 root.go:300] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/config/config.json: no such file or directory
	I0511 15:54:18.187123   84617 out.go:303] Setting JSON to true
	I0511 15:54:18.202812   84617 start.go:115] hostinfo: {"hostname":"37310.local","uptime":24833,"bootTime":1652284825,"procs":369,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0511 15:54:18.202886   84617 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0511 15:54:18.230367   84617 out.go:97] [download-only-20220511155349-84527] minikube v1.25.2 on Darwin 11.2.3
	I0511 15:54:18.230554   84617 notify.go:193] Checking for updates...
	W0511 15:54:18.230620   84617 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball: no such file or directory
	I0511 15:54:18.256055   84617 out.go:169] MINIKUBE_LOCATION=13639
	I0511 15:54:18.282491   84617 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 15:54:18.308532   84617 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0511 15:54:18.334434   84617 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 15:54:18.360336   84617 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	W0511 15:54:18.412321   84617 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0511 15:54:18.413046   84617 config.go:178] Loaded profile config "download-only-20220511155349-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0511 15:54:18.413132   84617 start.go:709] api.Load failed for download-only-20220511155349-84527: filestore "download-only-20220511155349-84527": Docker machine "download-only-20220511155349-84527" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0511 15:54:18.413206   84617 driver.go:358] Setting default libvirt URI to qemu:///system
	W0511 15:54:18.413253   84617 start.go:709] api.Load failed for download-only-20220511155349-84527: filestore "download-only-20220511155349-84527": Docker machine "download-only-20220511155349-84527" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0511 15:54:18.509759   84617 docker.go:137] docker version: linux-20.10.6
	I0511 15:54:18.509891   84617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 15:54:18.684625   84617 info.go:265] docker info: {ID:RQDQ:HCOB:T3HU:YQ6G:4CPW:M2H3:E64P:XHRS:32BB:YAUK:A452:DSC2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-11 22:54:18.622810485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0511 15:54:18.711630   84617 out.go:97] Using the docker driver based on existing profile
	I0511 15:54:18.711717   84617 start.go:284] selected driver: docker
	I0511 15:54:18.711733   84617 start.go:801] validating driver "docker" against &{Name:download-only-20220511155349-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220511155349-84527 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 15:54:18.712213   84617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 15:54:18.891293   84617 info.go:265] docker info: {ID:RQDQ:HCOB:T3HU:YQ6G:4CPW:M2H3:E64P:XHRS:32BB:YAUK:A452:DSC2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2022-05-11 22:54:18.829590685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0511 15:54:18.893259   84617 cni.go:95] Creating CNI manager for ""
	I0511 15:54:18.893276   84617 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0511 15:54:18.893290   84617 start_flags.go:306] config:
	{Name:download-only-20220511155349-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220511155349-84527 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 15:54:18.919238   84617 out.go:97] Starting control plane node download-only-20220511155349-84527 in cluster download-only-20220511155349-84527
	I0511 15:54:18.919312   84617 cache.go:120] Beginning downloading kic base image for docker with docker
	I0511 15:54:18.944945   84617 out.go:97] Pulling base image ...
	I0511 15:54:18.945009   84617 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 15:54:18.945089   84617 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0511 15:54:19.018117   84617 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.5/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0511 15:54:19.018184   84617 cache.go:57] Caching tarball of preloaded images
	I0511 15:54:19.018424   84617 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 15:54:19.045003   84617 out.go:97] Downloading Kubernetes v1.23.5 preload ...
	I0511 15:54:19.045055   84617 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 ...
	I0511 15:54:19.084007   84617 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0511 15:54:19.084034   84617 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a to local cache
	I0511 15:54:19.084162   84617 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local cache directory
	I0511 15:54:19.084181   84617 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local cache directory, skipping pull
	I0511 15:54:19.084184   84617 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in cache, skipping pull
	I0511 15:54:19.084193   84617 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a as a tarball
	I0511 15:54:19.143352   84617 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.5/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4?checksum=md5:d0fb3d86acaea9a7773bdef3468eac56 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220511155349-84527"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.5/LogsDuration (0.36s)
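In the logs above, the warning "Failed to list preload files" shows that the preloaded-tarball cache directory under MINIKUBE_HOME did not exist yet, so the v1.23.5 preload was looked up and fetched remotely. As a minimal sketch only (the file-name layout is inferred from the paths in the log, and this is not minikube's own implementation), such an on-disk existence check amounts to a stat call:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists reports whether a preloaded-images tarball is already cached
// under the given minikube home directory. The name and directory layout
// mirror the paths seen in the log above and are assumptions, not an API.
func preloadExists(minikubeHome, k8sVersion string) (bool, error) {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	_, err := os.Stat(path)
	if os.IsNotExist(err) {
		return false, nil // not cached; a remote download would be needed
	}
	return err == nil, err
}

func main() {
	// MINIKUBE_HOME may be empty, in which case the check is relative to the
	// current directory; this is only an illustration.
	ok, err := preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.23.5")
	fmt.Println("preload cached:", ok, "err:", err)
}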

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/json-events (7.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220511155349-84527 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220511155349-84527 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker : (7.296046035s)
--- PASS: TestDownloadOnly/v1.23.6-rc.0/json-events (7.30s)

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.6-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.23.6-rc.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220511155349-84527
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220511155349-84527: exit status 85 (339.219322ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/11 15:54:25
	Running on machine: 37310
	Binary: Built with gc go1.18.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0511 15:54:25.578371   84651 out.go:296] Setting OutFile to fd 1 ...
	I0511 15:54:25.578580   84651 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 15:54:25.578585   84651 out.go:309] Setting ErrFile to fd 2...
	I0511 15:54:25.578589   84651 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 15:54:25.578683   84651 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	W0511 15:54:25.578772   84651 root.go:300] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/config/config.json: no such file or directory
	I0511 15:54:25.578902   84651 out.go:303] Setting JSON to true
	I0511 15:54:25.593519   84651 start.go:115] hostinfo: {"hostname":"37310.local","uptime":24840,"bootTime":1652284825,"procs":373,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0511 15:54:25.593648   84651 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0511 15:54:25.620860   84651 out.go:97] [download-only-20220511155349-84527] minikube v1.25.2 on Darwin 11.2.3
	I0511 15:54:25.620965   84651 notify.go:193] Checking for updates...
	I0511 15:54:25.647300   84651 out.go:169] MINIKUBE_LOCATION=13639
	I0511 15:54:25.673727   84651 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 15:54:25.699804   84651 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0511 15:54:25.725584   84651 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 15:54:25.751910   84651 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	W0511 15:54:25.804187   84651 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0511 15:54:25.804906   84651 config.go:178] Loaded profile config "download-only-20220511155349-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	W0511 15:54:25.804990   84651 start.go:709] api.Load failed for download-only-20220511155349-84527: filestore "download-only-20220511155349-84527": Docker machine "download-only-20220511155349-84527" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0511 15:54:25.805070   84651 driver.go:358] Setting default libvirt URI to qemu:///system
	W0511 15:54:25.805111   84651 start.go:709] api.Load failed for download-only-20220511155349-84527: filestore "download-only-20220511155349-84527": Docker machine "download-only-20220511155349-84527" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0511 15:54:25.901421   84651 docker.go:137] docker version: linux-20.10.6
	I0511 15:54:25.901544   84651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 15:54:26.075412   84651 info.go:265] docker info: {ID:RQDQ:HCOB:T3HU:YQ6G:4CPW:M2H3:E64P:XHRS:32BB:YAUK:A452:DSC2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:47 SystemTime:2022-05-11 22:54:26.020543094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0511 15:54:26.102507   84651 out.go:97] Using the docker driver based on existing profile
	I0511 15:54:26.102535   84651 start.go:284] selected driver: docker
	I0511 15:54:26.102547   84651 start.go:801] validating driver "docker" against &{Name:download-only-20220511155349-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220511155349-84527 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 15:54:26.103006   84651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 15:54:26.292906   84651 info.go:265] docker info: {ID:RQDQ:HCOB:T3HU:YQ6G:4CPW:M2H3:E64P:XHRS:32BB:YAUK:A452:DSC2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:47 SystemTime:2022-05-11 22:54:26.236304375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0511 15:54:26.294896   84651 cni.go:95] Creating CNI manager for ""
	I0511 15:54:26.294914   84651 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0511 15:54:26.294925   84651 start_flags.go:306] config:
	{Name:download-only-20220511155349-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:download-only-20220511155349-84527 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 15:54:26.322038   84651 out.go:97] Starting control plane node download-only-20220511155349-84527 in cluster download-only-20220511155349-84527
	I0511 15:54:26.322149   84651 cache.go:120] Beginning downloading kic base image for docker with docker
	I0511 15:54:26.348708   84651 out.go:97] Pulling base image ...
	I0511 15:54:26.348830   84651 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0511 15:54:26.348894   84651 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0511 15:54:26.422208   84651 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6-rc.0/preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	I0511 15:54:26.422229   84651 cache.go:57] Caching tarball of preloaded images
	I0511 15:54:26.422427   84651 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0511 15:54:26.448362   84651 out.go:97] Downloading Kubernetes v1.23.6-rc.0 preload ...
	I0511 15:54:26.448395   84651 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0511 15:54:26.464263   84651 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0511 15:54:26.464287   84651 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a to local cache
	I0511 15:54:26.464384   84651 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local cache directory
	I0511 15:54:26.464402   84651 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local cache directory, skipping pull
	I0511 15:54:26.464406   84651 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in cache, skipping pull
	I0511 15:54:26.464414   84651 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a as a tarball
	I0511 15:54:26.550148   84651 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6-rc.0/preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8c474a02b5d7628fe0abb1816ff0a9c8 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220511155349-84527"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.34s)
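The preload download logged above is requested with an md5 digest in the URL's checksum query parameter. A small, illustrative sketch of verifying a downloaded tarball against such a digest; the file path is a placeholder, the expected digest is the one shown in the log, and this is not minikube's own download code:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// verifyMD5 compares the md5 digest of a file on disk against the expected
// hex string, e.g. the value after "checksum=md5:" in the URL logged above.
func verifyMD5(path, expected string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return false, err
	}
	return hex.EncodeToString(h.Sum(nil)) == expected, nil
}

func main() {
	// Placeholder path; substitute the actual cached tarball location.
	ok, err := verifyMD5("preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4",
		"8c474a02b5d7628fe0abb1816ff0a9c8")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("checksum match:", ok)
}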

                                                
                                    
TestDownloadOnly/DeleteAll (1.21s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-darwin-amd64 delete --all: (1.211107241s)
--- PASS: TestDownloadOnly/DeleteAll (1.21s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.68s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220511155349-84527
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.68s)

                                                
                                    
TestDownloadOnlyKic (7.57s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220511155435-84527 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220511155435-84527 --force --alsologtostderr --driver=docker : (6.102488337s)
helpers_test.go:175: Cleaning up "download-docker-20220511155435-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220511155435-84527
--- PASS: TestDownloadOnlyKic (7.57s)

                                                
                                    
TestBinaryMirror (2.21s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220511155443-84527 --alsologtostderr --binary-mirror http://127.0.0.1:50771 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220511155443-84527 --alsologtostderr --binary-mirror http://127.0.0.1:50771 --driver=docker : (1.238711075s)
helpers_test.go:175: Cleaning up "binary-mirror-20220511155443-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220511155443-84527
--- PASS: TestBinaryMirror (2.21s)
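TestBinaryMirror points minikube start at --binary-mirror http://127.0.0.1:50771. As a minimal sketch under stated assumptions (binaries already laid out under a local ./mirror directory; the test's own mirror helper is not shown in this log), such a mirror can be as simple as a static file server:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a local directory of Kubernetes binaries over HTTP so that a
	// --binary-mirror URL like http://127.0.0.1:50771 can be satisfied
	// without reaching the public release buckets.
	fs := http.FileServer(http.Dir("./mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:50771", fs))
}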

                                                
                                    
TestOffline (132.18s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220511164515-84527 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220511164515-84527 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (1m51.290640855s)
helpers_test.go:175: Cleaning up "offline-docker-20220511164515-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220511164515-84527
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220511164515-84527: (20.889288575s)
--- PASS: TestOffline (132.18s)

                                                
                                    
TestAddons/Setup (135.62s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220511155445-84527 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220511155445-84527 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m15.619733259s)
--- PASS: TestAddons/Setup (135.62s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.94s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 2.399726ms
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-bd6f4dd56-7j6rw" [b7af553d-1ba4-4b0d-be7d-98396ddcd945] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010705395s
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220511155445-84527 top pods -n kube-system
addons_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220511155445-84527 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.94s)
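The metrics-server check above waits up to 6m0s for pods matching "k8s-app=metrics-server" in kube-system to become healthy. A hedged client-go sketch of that kind of label-selector poll, assuming the default kubeconfig location and treating a Running pod as good enough; the test's real wait helper lives in helpers_test.go and is not reproduced here:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("running:", p.Name)
					return
				}
			}
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("timed out waiting for metrics-server")
}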

                                                
                                    
TestAddons/parallel/HelmTiller (11.39s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 14.857987ms

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-6d67d5465d-57ljx" [2d0adac9-037b-48a1-b9bb-1840ad0726bc] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0159611s
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220511155445-84527 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220511155445-84527 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.745032933s)
addons_test.go:440: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220511155445-84527 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.39s)

                                                
                                    
TestAddons/parallel/CSI (38.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 8.574456ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220511155445-84527 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220511155445-84527 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220511155445-84527 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [02d8d55f-1739-4ddc-aea9-7f484d5c2b70] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [02d8d55f-1739-4ddc-aea9-7f484d5c2b70] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [02d8d55f-1739-4ddc-aea9-7f484d5c2b70] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.008089381s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220511155445-84527 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220511155445-84527 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220511155445-84527 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220511155445-84527 delete pod task-pv-pod
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220511155445-84527 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220511155445-84527 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220511155445-84527 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220511155445-84527 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [5c15ac25-8dba-4c73-8055-c311dcfbf983] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [5c15ac25-8dba-4c73-8055-c311dcfbf983] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [5c15ac25-8dba-4c73-8055-c311dcfbf983] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.010984234s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220511155445-84527 delete pod task-pv-pod-restore
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220511155445-84527 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220511155445-84527 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220511155445-84527 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220511155445-84527 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.855360368s)
addons_test.go:592: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220511155445-84527 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (38.53s)

                                                
                                    
TestAddons/serial/GCPAuth (15.08s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220511155445-84527 create -f testdata/busybox.yaml
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [5be5ceb7-5e79-4668-b530-9925fab451c4] Pending
helpers_test.go:342: "busybox" [5be5ceb7-5e79-4668-b530-9925fab451c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [5be5ceb7-5e79-4668-b530-9925fab451c4] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.011813571s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220511155445-84527 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:628: (dbg) Run:  kubectl --context addons-20220511155445-84527 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220511155445-84527 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220511155445-84527 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220511155445-84527 addons disable gcp-auth --alsologtostderr -v=1: (6.141765483s)
--- PASS: TestAddons/serial/GCPAuth (15.08s)

                                                
                                    
TestAddons/StoppedEnableDisable (18.13s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220511155445-84527
addons_test.go:132: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220511155445-84527: (17.602790736s)
addons_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220511155445-84527
addons_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220511155445-84527
--- PASS: TestAddons/StoppedEnableDisable (18.13s)

                                                
                                    
TestCertOptions (73.21s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220511165246-84527 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220511165246-84527 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (1m4.831830457s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220511165246-84527 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220511165246-84527 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220511165246-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220511165246-84527
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220511165246-84527: (6.916449114s)
--- PASS: TestCertOptions (73.21s)
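TestCertOptions uses openssl x509 inside the node to confirm that the extra --apiserver-ips, --apiserver-names and --apiserver-port values ended up in the API server certificate. An equivalent local check in Go, assuming the certificate has first been copied out of the node to a local apiserver.crt (for example via the minikube ssh command shown above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Assumes /var/lib/minikube/certs/apiserver.crt was saved locally first.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The SANs should include the names and IPs passed on the command line
	// (e.g. www.google.com and 192.168.15.15 in the test run above), and
	// NotAfter reflects the --cert-expiration setting used by TestCertExpiration.
	fmt.Println("DNS names:", cert.DNSNames)
	fmt.Println("IPs:      ", cert.IPAddresses)
	fmt.Println("expires:  ", cert.NotAfter)
}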

                                                
                                    
TestCertExpiration (267.88s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220511164853-84527 --memory=2048 --cert-expiration=3m --driver=docker 
E0511 16:48:57.807563   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 16:49:07.985516   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 16:49:28.469520   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220511164853-84527 --memory=2048 --cert-expiration=3m --driver=docker : (1m15.198712189s)
E0511 16:50:09.437102   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 16:51:31.358470   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 16:51:44.254367   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:52:01.155199   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220511164853-84527 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220511164853-84527 --memory=2048 --cert-expiration=8760h --driver=docker : (6.918836807s)
helpers_test.go:175: Cleaning up "cert-expiration-20220511164853-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220511164853-84527
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220511164853-84527: (5.759374494s)
--- PASS: TestCertExpiration (267.88s)

                                                
                                    
TestDockerFlags (86.18s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220511164727-84527 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0511 16:47:35.332635   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:48:34.712359   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20220511164727-84527 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (1m11.408447484s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220511164727-84527 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220511164727-84527 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-20220511164727-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220511164727-84527
E0511 16:48:47.536028   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 16:48:47.541762   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 16:48:47.551982   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 16:48:47.574354   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 16:48:47.624322   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 16:48:47.705830   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 16:48:47.874320   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 16:48:48.195384   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 16:48:48.835696   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 16:48:50.120527   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 16:48:52.680812   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20220511164727-84527: (13.450218228s)
--- PASS: TestDockerFlags (86.18s)
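The pass above exercises end-to-end propagation of --docker-env and --docker-opt into the node's Docker unit. A minimal hand-run sketch of the same check, using the released minikube CLI rather than the out/minikube-darwin-amd64 build under test (the docker-flags-demo profile name is illustrative):

    # start a throwaway profile with the same custom daemon env/opts used above
    minikube start -p docker-flags-demo --memory=2048 --docker-env=FOO=BAR --docker-opt=debug --driver=docker
    # the env values should appear under Environment and the opts under ExecStart
    minikube -p docker-flags-demo ssh "sudo systemctl show docker --property=Environment --no-pager"
    minikube -p docker-flags-demo ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    minikube delete -p docker-flags-demo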
TestForceSystemdFlag (340.69s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220511164706-84527 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220511164706-84527 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (5m23.919208996s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220511164706-84527 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220511164706-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220511164706-84527
E0511 16:52:35.268518   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220511164706-84527: (16.062021513s)
--- PASS: TestForceSystemdFlag (340.69s)
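The force-systemd-flag run reduces to one assertion: with --force-systemd, Docker inside the node should report the systemd cgroup driver. A hand-run sketch under the same assumptions (illustrative profile name, released minikube CLI):

    minikube start -p force-systemd-demo --memory=2048 --force-systemd --driver=docker
    # expected to print "systemd" when the flag is honored
    minikube -p force-systemd-demo ssh "docker info --format {{.CgroupDriver}}"
    minikube delete -p force-systemd-demo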
TestForceSystemdEnv (81s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220511164545-84527 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:150: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220511164545-84527 --memory=2048 --alsologtostderr -v=5 --driver=docker : (1m6.482978922s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220511164545-84527 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-20220511164545-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220511164545-84527
E0511 16:47:01.218956   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220511164545-84527: (13.798211833s)
--- PASS: TestForceSystemdEnv (81.00s)

TestHyperKitDriverInstallOrUpdate (7.79s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
E0511 16:45:38.425659   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
--- PASS: TestHyperKitDriverInstallOrUpdate (7.79s)

TestErrorSpam/setup (72.94s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220511155831-84527 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 --driver=docker 
error_spam_test.go:78: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220511155831-84527 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 --driver=docker : (1m12.93927874s)
error_spam_test.go:88: acceptable stderr: "! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilites with Kubernetes 1.23.5."
--- PASS: TestErrorSpam/setup (72.94s)

TestErrorSpam/start (2.96s)
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 start --dry-run: (1.017521962s)
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 start --dry-run
--- PASS: TestErrorSpam/start (2.96s)

TestErrorSpam/status (1.96s)
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 status
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 status
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 status
--- PASS: TestErrorSpam/status (1.96s)

TestErrorSpam/pause (2.37s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 pause
--- PASS: TestErrorSpam/pause (2.37s)

TestErrorSpam/unpause (2.66s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 unpause
--- PASS: TestErrorSpam/unpause (2.66s)

TestErrorSpam/stop (18.23s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 stop
error_spam_test.go:156: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 stop: (17.300975213s)
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220511155831-84527 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220511155831-84527 stop
--- PASS: TestErrorSpam/stop (18.23s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1784: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/test/nested/copy/84527/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (124.17s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2163: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220511160019-84527 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
E0511 16:02:01.140503   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:02:01.149398   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:02:01.162276   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:02:01.185911   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:02:01.226434   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:02:01.309791   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:02:01.469932   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:02:01.790389   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:02:02.433151   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:02:03.713524   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:02:06.274023   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:02:11.400755   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:02:21.651352   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
functional_test.go:2163: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220511160019-84527 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (2m4.171167172s)
--- PASS: TestFunctional/serial/StartWithProxy (124.17s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.47s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220511160019-84527 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220511160019-84527 --alsologtostderr -v=8: (7.467763259s)
functional_test.go:658: soft start took 7.468374683s for "functional-20220511160019-84527" cluster.
--- PASS: TestFunctional/serial/SoftStart (7.47s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (1.84s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-20220511160019-84527 get po -A
functional_test.go:691: (dbg) Done: kubectl --context functional-20220511160019-84527 get po -A: (1.842414122s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.84s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.77s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 cache add k8s.gcr.io/pause:3.1: (1.225595094s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 cache add k8s.gcr.io/pause:3.3: (1.886919669s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 cache add k8s.gcr.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 cache add k8s.gcr.io/pause:latest: (1.652407588s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.77s)

TestFunctional/serial/CacheCmd/cache/add_local (2.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220511160019-84527 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local182086573/001
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 cache add minikube-local-cache-test:functional-20220511160019-84527
functional_test.go:1084: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 cache add minikube-local-cache-test:functional-20220511160019-84527: (1.44316489s)
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 cache delete minikube-local-cache-test:functional-20220511160019-84527
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220511160019-84527
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.07s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.73s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.73s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (637.886242ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 cache reload
E0511 16:02:42.132870   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
functional_test.go:1153: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 cache reload: (1.223844345s)
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.18s)
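The cache_reload sequence above is the standard way to confirm that minikube's local image cache can restore an image removed from the node. The same flow by hand (cache-demo is an illustrative profile name, released minikube CLI):

    minikube -p cache-demo cache add k8s.gcr.io/pause:latest
    # remove the image on the node; crictl inspecti then fails, as in the exit status 1 above
    minikube -p cache-demo ssh sudo docker rmi k8s.gcr.io/pause:latest
    minikube -p cache-demo ssh sudo crictl inspecti k8s.gcr.io/pause:latest
    # reload pushes cached images back onto the node, after which inspecti succeeds again
    minikube -p cache-demo cache reload
    minikube -p cache-demo ssh sudo crictl inspecti k8s.gcr.io/pause:latest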
TestFunctional/serial/CacheCmd/cache/delete (0.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.48s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 kubectl -- --context functional-20220511160019-84527 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.48s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.6s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-20220511160019-84527 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.60s)

TestFunctional/serial/ExtraConfig (32.59s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220511160019-84527 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220511160019-84527 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.584666471s)
functional_test.go:756: restart took 32.58480658s for "functional-20220511160019-84527" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.59s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-20220511160019-84527 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.79s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 logs
functional_test.go:1231: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 logs: (3.784998125s)
--- PASS: TestFunctional/serial/LogsCmd (3.79s)

TestFunctional/serial/LogsFileCmd (3.98s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd591386270/001/logs.txt
E0511 16:03:23.102621   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
functional_test.go:1245: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd591386270/001/logs.txt: (3.981510476s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.98s)

TestFunctional/parallel/ConfigCmd (0.5s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220511160019-84527 config get cpus: exit status 14 (56.052016ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220511160019-84527 config get cpus: exit status 14 (56.62584ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
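The two exit status 14 results above are expected behavior: config get on a key that has been unset fails with "specified key could not be found in config". A short sketch of the round trip being checked:

    minikube config set cpus 2
    minikube config get cpus     # prints 2
    minikube config unset cpus
    minikube config get cpus     # non-zero exit (status 14 in the run above)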
TestFunctional/parallel/DashboardCmd (11.91s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220511160019-84527 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220511160019-84527 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 87792: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.91s)

TestFunctional/parallel/DryRun (1.76s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220511160019-84527 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:969: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220511160019-84527 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (791.633059ms)
-- stdout --
	* [functional-20220511160019-84527] minikube v1.25.2 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13639
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0511 16:04:36.594572   87661 out.go:296] Setting OutFile to fd 1 ...
	I0511 16:04:36.594778   87661 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 16:04:36.594784   87661 out.go:309] Setting ErrFile to fd 2...
	I0511 16:04:36.594788   87661 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 16:04:36.594904   87661 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	I0511 16:04:36.595187   87661 out.go:303] Setting JSON to false
	I0511 16:04:36.611196   87661 start.go:115] hostinfo: {"hostname":"37310.local","uptime":25451,"bootTime":1652284825,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0511 16:04:36.611292   87661 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0511 16:04:36.639235   87661 out.go:177] * [functional-20220511160019-84527] minikube v1.25.2 on Darwin 11.2.3
	I0511 16:04:36.665160   87661 out.go:177]   - MINIKUBE_LOCATION=13639
	I0511 16:04:36.691459   87661 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 16:04:36.738108   87661 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0511 16:04:36.764214   87661 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 16:04:36.790307   87661 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	I0511 16:04:36.817157   87661 config.go:178] Loaded profile config "functional-20220511160019-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 16:04:36.817869   87661 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 16:04:36.970664   87661 docker.go:137] docker version: linux-20.10.6
	I0511 16:04:36.970855   87661 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 16:04:37.169591   87661 info.go:265] docker info: {ID:RQDQ:HCOB:T3HU:YQ6G:4CPW:M2H3:E64P:XHRS:32BB:YAUK:A452:DSC2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-05-11 23:04:37.091563175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0511 16:04:37.196579   87661 out.go:177] * Using the docker driver based on existing profile
	I0511 16:04:37.222094   87661 start.go:284] selected driver: docker
	I0511 16:04:37.222112   87661 start.go:801] validating driver "docker" against &{Name:functional-20220511160019-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220511160019-84527 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regis
try-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 16:04:37.222216   87661 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0511 16:04:37.250220   87661 out.go:177] 
	W0511 16:04:37.276201   87661 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0511 16:04:37.302197   87661 out.go:177] 
** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220511160019-84527 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.76s)
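The non-zero dry-run exit above is the intended validation path: minikube rejects the 250MB request before doing any work because it is below the 1800MB usable minimum, and only the second dry run, which keeps the profile's existing memory setting, succeeds. Reproducible by hand (dryrun-demo is an illustrative profile name):

    minikube start -p dryrun-demo --dry-run --memory 250MB --driver=docker
    echo $?    # non-zero; exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) in the run above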
TestFunctional/parallel/InternationalLanguage (0.81s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220511160019-84527 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220511160019-84527 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (814.023812ms)
-- stdout --
	* [functional-20220511160019-84527] minikube v1.25.2 sur Darwin 11.2.3
	  - MINIKUBE_LOCATION=13639
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0511 16:04:23.640786   87287 out.go:296] Setting OutFile to fd 1 ...
	I0511 16:04:23.640954   87287 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 16:04:23.640959   87287 out.go:309] Setting ErrFile to fd 2...
	I0511 16:04:23.640963   87287 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 16:04:23.641085   87287 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	I0511 16:04:23.641313   87287 out.go:303] Setting JSON to false
	I0511 16:04:23.656921   87287 start.go:115] hostinfo: {"hostname":"37310.local","uptime":25438,"bootTime":1652284825,"procs":361,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0511 16:04:23.657011   87287 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0511 16:04:23.682799   87287 out.go:177] * [functional-20220511160019-84527] minikube v1.25.2 sur Darwin 11.2.3
	I0511 16:04:23.729650   87287 out.go:177]   - MINIKUBE_LOCATION=13639
	I0511 16:04:23.776651   87287 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	I0511 16:04:23.823418   87287 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0511 16:04:23.870584   87287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 16:04:23.917637   87287 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	I0511 16:04:23.944056   87287 config.go:178] Loaded profile config "functional-20220511160019-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 16:04:23.944422   87287 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 16:04:24.042025   87287 docker.go:137] docker version: linux-20.10.6
	I0511 16:04:24.042196   87287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 16:04:24.227654   87287 info.go:265] docker info: {ID:RQDQ:HCOB:T3HU:YQ6G:4CPW:M2H3:E64P:XHRS:32BB:YAUK:A452:DSC2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:53 SystemTime:2022-05-11 23:04:24.169239624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0511 16:04:24.254335   87287 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0511 16:04:24.280179   87287 start.go:284] selected driver: docker
	I0511 16:04:24.280199   87287 start.go:801] validating driver "docker" against &{Name:functional-20220511160019-84527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220511160019-84527 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regis
try-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 16:04:24.280334   87287 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0511 16:04:24.308154   87287 out.go:177] 
	W0511 16:04:24.334152   87287 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0511 16:04:24.381136   87287 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.81s)

TestFunctional/parallel/StatusCmd (2.58s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:867: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.58s)
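The status command is exercised three ways here: plain, with a -f Go template, and as JSON. A minimal Go sketch that consumes the JSON form, assuming the single-node output shape; the struct fields mirror the keys the -f template above already references:

// status_json_sketch.go - a minimal sketch, assuming `status -o json` returns
// one object for a single-node cluster.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type clusterStatus struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p",
		"functional-20220511160019-84527", "status", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}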

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd (16.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1435: (dbg) Run:  kubectl --context functional-20220511160019-84527 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-20220511160019-84527 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-6v66s" [0c75e75e-07d8-45d9-bc96-828db7313a0f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-54fbb85-6v66s" [0c75e75e-07d8-45d9-bc96-828db7313a0f] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 8.010441479s
functional_test.go:1451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1451: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 service list: (1.289832555s)
functional_test.go:1465: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 service --namespace=default --https --url hello-node

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1465: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 service --namespace=default --https --url hello-node: (2.235946038s)
functional_test.go:1478: found endpoint: https://127.0.0.1:55631
functional_test.go:1493: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 service hello-node --url --format={{.IP}}

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1493: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 service hello-node --url --format={{.IP}}: (2.354098781s)
functional_test.go:1507: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 service hello-node --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1507: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 service hello-node --url: (2.466829658s)
functional_test.go:1513: found endpoint for hello-node: http://127.0.0.1:55975
--- PASS: TestFunctional/parallel/ServiceCmd (16.50s)
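A minimal Go sketch of the same flow, reusing the exact kubectl and minikube commands logged above; it skips the pod-readiness wait the test performs and reduces error handling to printing:

// service_url_sketch.go - a minimal sketch, not the test's helper code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%s %v: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	ctx := "--context=functional-20220511160019-84527"
	run("kubectl", ctx, "create", "deployment", "hello-node",
		"--image=k8s.gcr.io/echoserver:1.8")
	run("kubectl", ctx, "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")
	// With the Docker driver on macOS the URL comes back as a forwarded
	// 127.0.0.1 port, as in the endpoints logged above.
	url := run("out/minikube-darwin-amd64", "-p", "functional-20220511160019-84527",
		"service", "hello-node", "--url")
	fmt.Println("endpoint:", strings.TrimSpace(url))
}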

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1622: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 addons list

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1634: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [5e26c7af-f47c-4e48-ac2b-04062e81108e] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.016895873s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220511160019-84527 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220511160019-84527 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220511160019-84527 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220511160019-84527 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [bfaae68c-fa52-4a9f-a6a0-47b820b809b2] Pending
helpers_test.go:342: "sp-pod" [bfaae68c-fa52-4a9f-a6a0-47b820b809b2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [bfaae68c-fa52-4a9f-a6a0-47b820b809b2] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.011561308s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220511160019-84527 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220511160019-84527 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220511160019-84527 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [b6975ec7-52cd-449d-91a6-1b3b192d3c0d] Pending
helpers_test.go:342: "sp-pod" [b6975ec7-52cd-449d-91a6-1b3b192d3c0d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [b6975ec7-52cd-449d-91a6-1b3b192d3c0d] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.013658092s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220511160019-84527 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.40s)
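The PVC flow above applies a claim, waits for a pod to consume it, writes a file, then recreates the pod to confirm the data survives. A minimal Go sketch of the claim-side wait, assuming a jsonpath phase check rather than the test's own helpers:

// pvc_wait_sketch.go - a minimal sketch: poll the claim created from
// testdata/storage-provisioner/pvc.yaml until its phase reports Bound.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	args := []string{"--context", "functional-20220511160019-84527",
		"get", "pvc", "myclaim", "-o", "jsonpath={.status.phase}"}
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", args...).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("pvc myclaim is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc myclaim")
}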

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1657: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1674: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.40s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh -n functional-20220511160019-84527 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 cp functional-20220511160019-84527:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd2883463680/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh -n functional-20220511160019-84527 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.61s)
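A minimal Go sketch of the same cp/ssh round trip, comparing what comes back over ssh with the local source file; the paths and flags are the ones logged above:

// cp_roundtrip_sketch.go - a minimal sketch, not the test's helper code.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-darwin-amd64"
	profile := "functional-20220511160019-84527"
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if out, err := exec.Command(mk, "-p", profile, "cp", "testdata/cp-test.txt",
		"/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp: %v\n%s", err, out))
	}
	got, err := exec.Command(mk, "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("contents match:", bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
}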

                                                
                                    
x
+
TestFunctional/parallel/MySQL (21.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1722: (dbg) Run:  kubectl --context functional-20220511160019-84527 replace --force -f testdata/mysql.yaml
functional_test.go:1728: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-t7sbm" [9980d194-4c26-4b25-aef8-9bf22687c0d1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-t7sbm" [9980d194-4c26-4b25-aef8-9bf22687c0d1] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1728: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.018965623s
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511160019-84527 exec mysql-b87c45988-t7sbm -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220511160019-84527 exec mysql-b87c45988-t7sbm -- mysql -ppassword -e "show databases;": exit status 1 (150.126043ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511160019-84527 exec mysql-b87c45988-t7sbm -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220511160019-84527 exec mysql-b87c45988-t7sbm -- mysql -ppassword -e "show databases;": exit status 1 (135.696784ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511160019-84527 exec mysql-b87c45988-t7sbm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.79s)
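The two ERROR 2002 exits above are expected: the pod reports Running before mysqld is listening on its socket, so the check simply retries until "show databases" succeeds. A minimal Go sketch of that retry, reusing the pod name and command from the log:

// mysql_retry_sketch.go - a minimal sketch of the readiness retry shown above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-20220511160019-84527", "exec",
		"mysql-b87c45988-t7sbm", "--", "mysql", "-ppassword", "-e", "show databases;"}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mysql never became reachable")
}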

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1858: Checking for existence of /etc/test/nested/copy/84527/hosts within VM
functional_test.go:1860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "sudo cat /etc/test/nested/copy/84527/hosts"
functional_test.go:1865: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (4.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1901: Checking for existence of /etc/ssl/certs/84527.pem within VM
functional_test.go:1902: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "sudo cat /etc/ssl/certs/84527.pem"
functional_test.go:1901: Checking for existence of /usr/share/ca-certificates/84527.pem within VM
functional_test.go:1902: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "sudo cat /usr/share/ca-certificates/84527.pem"
functional_test.go:1901: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1902: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1928: Checking for existence of /etc/ssl/certs/845272.pem within VM
functional_test.go:1929: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "sudo cat /etc/ssl/certs/845272.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1928: Checking for existence of /usr/share/ca-certificates/845272.pem within VM
functional_test.go:1929: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "sudo cat /usr/share/ca-certificates/845272.pem"
functional_test.go:1928: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1929: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (4.23s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220511160019-84527 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1956: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1956: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "sudo systemctl is-active crio": exit status 1 (641.965292ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
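The non-zero exit here is the passing outcome: systemctl is-active returns status 3 for an inactive unit, which the minikube ssh wrapper surfaces as the "Process exited with status 3" line while stdout still carries "inactive". A minimal Go sketch of the same check:

// runtime_inactive_sketch.go - a minimal sketch: treat stdout "inactive" as
// the expected result even though the command itself exits non-zero.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p",
		"functional-20220511160019-84527", "ssh",
		"sudo systemctl is-active crio").Output() // a non-nil err is expected here
	state := strings.TrimSpace(string(out))
	fmt.Printf("crio: %q (err: %v)\n", state, err)
	if state == "inactive" {
		fmt.Println("ok: cri-o is disabled while Docker is the active runtime")
	}
}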

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2185: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 version -o=json --components
functional_test.go:2199: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 version -o=json --components: (1.536152585s)
--- PASS: TestFunctional/parallel/Version/components (1.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220511160019-84527 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220511160019-84527
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220511160019-84527
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220511160019-84527 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-20220511160019-84527 | 6b85727315891 | 30B    |
| k8s.gcr.io/kube-proxy                       | v1.23.5                         | 3c53fa8541f95 | 112MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.5                         | 884d49d6d8c9f | 53.5MB |
| k8s.gcr.io/kube-controller-manager          | v1.23.5                         | b0c9e5e4dbb14 | 125MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                    | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | 3.1                             | da86e6ba6ca19 | 742kB  |
| docker.io/localhost/my-image                | functional-20220511160019-84527 | 9f0f2650a5948 | 1.24MB |
| docker.io/library/nginx                     | alpine                          | 51696c87e77e4 | 23.4MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.5                         | 3fc1d62d65872 | 135MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                          | a4ca41631cc7a | 46.8MB |
| k8s.gcr.io/pause                            | 3.3                             | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | latest                          | 350b164e7ae1d | 240kB  |
| gcr.io/k8s-minikube/busybox                 | latest                          | beae173ccac6a | 1.24MB |
| k8s.gcr.io/etcd                             | 3.5.1-0                         | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/pause                            | 3.6                             | 6270bb605e12e | 683kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>                          | 7801cfc6d5c07 | 34.4MB |
| k8s.gcr.io/echoserver                       | 1.8                             | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest                          | 7425d3a7c478e | 142MB  |
| docker.io/library/mysql                     | 5.7                             | a3d35804fa376 | 462MB  |
| gcr.io/google-containers/addon-resizer      | functional-20220511160019-84527 | ffd4cfbbe753e | 32.9MB |
|---------------------------------------------|---------------------------------|---------------|--------|
2022/05/11 16:04:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220511160019-84527 image ls --format json:
[{"id":"51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.5"],"size":"112000000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"34400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"9f0f2650a594888cd38826336eb9594a9cdb587e08d631e5c9784e4dc2893b26","repoDi
gests":[],"repoTags":["docker.io/localhost/my-image:functional-20220511160019-84527"],"size":"1240000"},{"id":"a3d35804fa376a141b9a9dad8f5534c3179f4c328d6efc67c5c5145d257c291a","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"462000000"},{"id":"b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.5"],"size":"125000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"7425d3a7c478efbeb75f0937060117343a9a510f72f5f7ad9f14b1501a36940c","repoDigests":[],"repoTags":["docker.io/library/ngin
x:latest"],"size":"142000000"},{"id":"3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.5"],"size":"135000000"},{"id":"884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.5"],"size":"53500000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220511160019-84527"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"6b857273158913aff6c8960c84c5311dade93d855c5258405dd08f1b2d3be520","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220511160019-84527"]
,"size":"30"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.50s)
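The JSON payload above is a flat array of objects with id, repoDigests, repoTags and size (bytes, encoded as a string), so a small struct is enough to consume it. A minimal Go sketch:

// image_ls_json_sketch.go - a minimal sketch that decodes the payload shown
// above and prints one line per image.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p",
		"functional-20220511160019-84527", "image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-13.13s %v (%s bytes)\n", img.ID, img.RepoTags, img.Size)
	}
}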

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220511160019-84527 image ls --format yaml:
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.5
size: "135000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.5
size: "125000000"
- id: 884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.5
size: "53500000"
- id: 3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.5
size: "112000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "34400000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220511160019-84527
size: "32900000"
- id: 7425d3a7c478efbeb75f0937060117343a9a510f72f5f7ad9f14b1501a36940c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: a3d35804fa376a141b9a9dad8f5534c3179f4c328d6efc67c5c5145d257c291a
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "462000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 6b857273158913aff6c8960c84c5311dade93d855c5258405dd08f1b2d3be520
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220511160019-84527
size: "30"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh pgrep buildkitd: exit status 1 (618.969423ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image build -t localhost/my-image:functional-20220511160019-84527 testdata/build
E0511 16:04:45.032055   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
functional_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 image build -t localhost/my-image:functional-20220511160019-84527 testdata/build: (2.986328343s)
functional_test.go:315: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220511160019-84527 image build -t localhost/my-image:functional-20220511160019-84527 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 6dfc14be5b43
Removing intermediate container 6dfc14be5b43
---> a1bf2c944ede
Step 3/3 : ADD content.txt /
---> 9f0f2650a594
Successfully built 9f0f2650a594
Successfully tagged localhost/my-image:functional-20220511160019-84527
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.953460998s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220511160019-84527
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.10s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/bash (2.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220511160019-84527 docker-env) && out/minikube-darwin-amd64 status -p functional-20220511160019-84527"
functional_test.go:494: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220511160019-84527 docker-env) && out/minikube-darwin-amd64 status -p functional-20220511160019-84527": (1.613660368s)
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220511160019-84527 docker-env) && docker images"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:517: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220511160019-84527 docker-env) && docker images": (1.188273571s)
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.80s)
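docker-env only prints export statements; it is the eval inside a single bash -c invocation that points the docker client at the cluster's daemon for the rest of that shell line. A minimal Go sketch running the same script as the log:

// docker_env_sketch.go - a minimal sketch of the same bash round trip.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	script := `eval $(out/minikube-darwin-amd64 -p functional-20220511160019-84527 docker-env) && docker images`
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v\n%s", err, out))
	}
	// These are the images inside the minikube node, not the host's.
	fmt.Printf("%s", out)
}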

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511160019-84527

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511160019-84527: (3.301872218s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.79s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2048: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2048: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.98s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2048: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511160019-84527

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511160019-84527: (2.459564586s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.90s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220511160019-84527
functional_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511160019-84527
functional_test.go:240: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511160019-84527: (4.466599963s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.90s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image save gcr.io/google-containers/addon-resizer:functional-20220511160019-84527 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:375: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 image save gcr.io/google-containers/addon-resizer:functional-20220511160019-84527 /Users/jenkins/workspace/addon-resizer-save.tar: (2.165204442s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image rm gcr.io/google-containers/addon-resizer:functional-20220511160019-84527
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.07s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:404: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 image load /Users/jenkins/workspace/addon-resizer-save.tar: (2.340814065s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.86s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220511160019-84527
functional_test.go:419: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220511160019-84527

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220511160019-84527 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220511160019-84527: (2.610056015s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220511160019-84527
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.87s)
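ImageSaveToFile, ImageLoadFromFile and ImageSaveDaemon above are the same tarball round trip driven from different ends. A minimal Go sketch chaining the file-based half, reusing the tag and path from the log:

// image_roundtrip_sketch.go - a minimal sketch of the save/load round trip.
package main

import (
	"fmt"
	"os/exec"
)

func mk(args ...string) {
	all := append([]string{"-p", "functional-20220511160019-84527"}, args...)
	out, err := exec.Command("out/minikube-darwin-amd64", all...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
	}
}

func main() {
	tag := "gcr.io/google-containers/addon-resizer:functional-20220511160019-84527"
	tar := "/Users/jenkins/workspace/addon-resizer-save.tar"
	mk("image", "save", tag, tar) // cluster image -> local tarball
	mk("image", "load", tar)      // local tarball -> back into the node
	mk("image", "ls")             // the tag should be listed again
	fmt.Println("round trip complete")
}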

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-darwin-amd64 profile lis

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1273: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-darwin-amd64 profile list

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1313: Took "680.943756ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1327: Took "83.206731ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1364: Took "725.197997ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1377: Took "129.123256ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.85s)
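The timing assertions above only care that profile list and its --light variant return quickly; if the JSON itself is needed, decoding it loosely avoids pinning a schema this log does not show. A minimal Go sketch, assuming only that the top level is a JSON object:

// profile_list_sketch.go - a minimal sketch that decodes `profile list -o json`
// into raw messages rather than assuming field names.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64",
		"profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var payload map[string]json.RawMessage
	if err := json.Unmarshal(out, &payload); err != nil {
		panic(err)
	}
	for key, raw := range payload {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}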

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220511160019-84527 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220511160019-84527 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [c481d59b-8c9c-4714-943a-49e6d6d94332] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [c481d59b-8c9c-4714-943a-49e6d6d94332] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx-svc" [c481d59b-8c9c-4714-943a-49e6d6d94332] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.016715435s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.20s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220511160019-84527 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (3.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (3.95s)
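With the tunnel from StartTunnel still running, the LoadBalancer ingress IP reported above (127.0.0.1 on this driver) answers plain HTTP. A minimal Go sketch of that probe, assuming nginx-svc serves on port 80:

// tunnel_probe_sketch.go - a minimal sketch polling the tunneled service.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("http://127.0.0.1/")
		if err == nil {
			resp.Body.Close()
			fmt.Println("tunnel is working, status:", resp.Status)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("tunnel endpoint never responded")
}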

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220511160019-84527 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 87259: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220511160019-84527 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1792726548/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1652310264411984000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1792726548/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1652310264411984000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1792726548/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1652310264411984000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1792726548/001/test-1652310264411984000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (656.624067ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 11 23:04 created-by-test
-rw-r--r-- 1 docker docker 24 May 11 23:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 11 23:04 test-1652310264411984000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh cat /mount-9p/test-1652310264411984000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220511160019-84527 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [abe88105-918e-4daf-a3b3-55cb50f30bda] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [abe88105-918e-4daf-a3b3-55cb50f30bda] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [abe88105-918e-4daf-a3b3-55cb50f30bda] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.01513494s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220511160019-84527 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh stat /mount-9p/created-by-pod

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220511160019-84527 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1792726548/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.71s)
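
The any-port scenario above can be reproduced manually with the same commands the test issues; a minimal sketch, assuming the profile from this run and an arbitrary host directory in place of the per-test temp dir (run the mount in a second terminal or background it):

	out/minikube-darwin-amd64 mount -p functional-20220511160019-84527 <host-dir>:/mount-9p --alsologtostderr -v=1 &
	out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh -- ls -la /mount-9p
	out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "sudo umount -f /mount-9p"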

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (3.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220511160019-84527 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2226810330/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (663.05327ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220511160019-84527 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2226810330/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh "sudo umount -f /mount-9p": exit status 1 (605.899701ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-20220511160019-84527 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220511160019-84527 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2226810330/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.41s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.29s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220511160019-84527
--- PASS: TestFunctional/delete_addon-resizer_images (0.29s)

                                                
                                    
TestFunctional/delete_my-image_image (0.13s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220511160019-84527
--- PASS: TestFunctional/delete_my-image_image (0.13s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.12s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220511160019-84527
--- PASS: TestFunctional/delete_minikube_cached_images (0.12s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (132.94s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220511160505-84527 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0511 16:07:01.148744   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220511160505-84527 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : (2m12.936859526s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (132.94s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.51s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220511160505-84527 addons enable ingress --alsologtostderr -v=5
E0511 16:07:28.885461   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220511160505-84527 addons enable ingress --alsologtostderr -v=5: (16.511251995s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.51s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220511160505-84527 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

                                                
                                    
TestJSONOutput/start/Command (123.43s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220511160823-84527 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0511 16:08:34.644190   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:08:34.649305   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:08:34.660102   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:08:34.680755   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:08:34.728140   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:08:34.808247   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:08:34.978318   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:08:35.307728   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:08:35.957840   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:08:37.238877   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:08:39.807960   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:08:44.928303   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:08:55.168748   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:09:15.652192   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:09:56.613587   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220511160823-84527 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (2m3.42695097s)
--- PASS: TestJSONOutput/start/Command (123.43s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220511160823-84527 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220511160823-84527 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.77s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (17.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220511160823-84527 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220511160823-84527 --output=json --user=testUser: (17.352952144s)
--- PASS: TestJSONOutput/stop/Command (17.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.08s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220511161052-84527 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220511161052-84527 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (383.979507ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3a6be703-e16a-4a43-afcc-845ba5f77db0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220511161052-84527] minikube v1.25.2 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb9383c4-4ca5-405a-a27c-50a9ae8de82d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13639"}}
	{"specversion":"1.0","id":"3755baea-0f49-48e7-bac5-6d1e4de4d5b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig"}}
	{"specversion":"1.0","id":"d75cb7e7-9e5d-49c6-b75d-67cbb16ddbfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"458f95c7-81d8-46c3-ab37-aa74210e0e0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a1f0b946-6ef4-4bf7-ae56-e30a0768c5d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube"}}
	{"specversion":"1.0","id":"b8d6edf6-6138-4e4d-96ba-9bf5c33ae426","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220511161052-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220511161052-84527
--- PASS: TestErrorJSONOutput (1.08s)
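
Each stdout line above is a CloudEvents 1.0 JSON object; the final event of type io.k8s.sigs.minikube.error carries the exit code (56) and the DRV_UNSUPPORTED_OS reason. A minimal sketch for filtering such a run down to its error events (jq is an assumption here, not something the test itself uses):

	out/minikube-darwin-amd64 start -p <profile> --memory=2200 --output=json --wait=true --driver=fail | jq -c 'select(.type == "io.k8s.sigs.minikube.error")'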

                                                
                                    
TestKicCustomNetwork/create_custom_network (87.17s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220511161053-84527 --network=
E0511 16:11:18.539318   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:12:01.165557   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220511161053-84527 --network=: (1m13.83546328s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220511161053-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220511161053-84527
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220511161053-84527: (13.214037904s)
--- PASS: TestKicCustomNetwork/create_custom_network (87.17s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (75.16s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220511161221-84527 --network=bridge
E0511 16:12:35.267363   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:12:35.273027   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:12:35.283852   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:12:35.306552   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:12:35.349944   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:12:35.434970   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:12:35.595088   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:12:35.916514   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:12:36.566471   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:12:37.847366   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:12:40.416885   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:12:45.538433   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:12:55.779315   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:13:16.268127   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220511161221-84527 --network=bridge: (1m5.579812477s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220511161221-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220511161221-84527
E0511 16:13:34.652198   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220511161221-84527: (9.466757128s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (75.16s)

                                                
                                    
TestKicExistingNetwork (86.76s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220511161341-84527 --network=existing-network
E0511 16:13:57.236871   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:14:02.387661   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220511161341-84527 --network=existing-network: (1m7.624927064s)
helpers_test.go:175: Cleaning up "existing-network-20220511161341-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220511161341-84527
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220511161341-84527: (13.590112669s)
--- PASS: TestKicExistingNetwork (86.76s)

                                                
                                    
TestKicCustomSubnet (86.54s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-20220511161502-84527 --subnet=192.168.60.0/24
E0511 16:15:19.160175   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-20220511161502-84527 --subnet=192.168.60.0/24: (1m13.558205146s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220511161502-84527 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220511161502-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-20220511161502-84527
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-20220511161502-84527: (12.861679173s)
--- PASS: TestKicCustomSubnet (86.54s)
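
The subnet assertion in TestKicCustomSubnet is just the docker network inspect call shown above; a minimal sketch of the same check for any profile started with --subnet (the profile name is a placeholder):

	out/minikube-darwin-amd64 start -p <profile> --subnet=192.168.60.0/24
	docker network inspect <profile> --format "{{(index .IPAM.Config 0).Subnet}}"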

                                                
                                    
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (48.48s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220511161629-84527 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
E0511 16:17:01.167464   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220511161629-84527 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (47.475056073s)
--- PASS: TestMountStart/serial/StartWithMountFirst (48.48s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.62s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220511161629-84527 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.62s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (47.98s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220511161629-84527 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
E0511 16:17:35.278390   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:18:03.009710   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220511161629-84527 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (46.974015988s)
--- PASS: TestMountStart/serial/StartWithMountSecond (47.98s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220511161629-84527 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.68s)

                                                
                                    
TestMountStart/serial/DeleteFirst (11.78s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220511161629-84527 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220511161629-84527 --alsologtostderr -v=5: (11.778090077s)
--- PASS: TestMountStart/serial/DeleteFirst (11.78s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.61s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220511161629-84527 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.61s)

                                                
                                    
TestMountStart/serial/Stop (7.49s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220511161629-84527
E0511 16:18:24.268556   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220511161629-84527: (7.487287115s)
--- PASS: TestMountStart/serial/Stop (7.49s)

                                                
                                    
TestMountStart/serial/RestartStopped (30.31s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220511161629-84527
E0511 16:18:34.657904   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220511161629-84527: (29.306100439s)
--- PASS: TestMountStart/serial/RestartStopped (30.31s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.61s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220511161629-84527 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.61s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (231.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220511161910-84527 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0511 16:22:01.192967   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:22:35.308432   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220511161910-84527 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (3m50.761494235s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status --alsologtostderr
multinode_test.go:89: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status --alsologtostderr: (1.10562873s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (231.87s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.994786134s)
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- rollout status deployment/busybox: (2.738010638s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- exec busybox-7978565885-5mtvx -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- exec busybox-7978565885-x6xmg -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- exec busybox-7978565885-5mtvx -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- exec busybox-7978565885-x6xmg -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- exec busybox-7978565885-5mtvx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- exec busybox-7978565885-x6xmg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.13s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- exec busybox-7978565885-5mtvx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- exec busybox-7978565885-5mtvx -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- exec busybox-7978565885-x6xmg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220511161910-84527 -- exec busybox-7978565885-x6xmg -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                    
TestMultiNode/serial/AddNode (111.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220511161910-84527 -v 3 --alsologtostderr
E0511 16:23:34.684323   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:24:57.784236   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220511161910-84527 -v 3 --alsologtostderr: (1m50.026487544s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status --alsologtostderr: (1.569757021s)
--- PASS: TestMultiNode/serial/AddNode (111.60s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (23.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status --output json --alsologtostderr: (1.621462918s)
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 cp testdata/cp-test.txt multinode-20220511161910-84527:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 cp multinode-20220511161910-84527:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile964987628/001/cp-test_multinode-20220511161910-84527.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 cp multinode-20220511161910-84527:/home/docker/cp-test.txt multinode-20220511161910-84527-m02:/home/docker/cp-test_multinode-20220511161910-84527_multinode-20220511161910-84527-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 cp multinode-20220511161910-84527:/home/docker/cp-test.txt multinode-20220511161910-84527-m02:/home/docker/cp-test_multinode-20220511161910-84527_multinode-20220511161910-84527-m02.txt: (1.00662615s)
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527-m02 "sudo cat /home/docker/cp-test_multinode-20220511161910-84527_multinode-20220511161910-84527-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 cp multinode-20220511161910-84527:/home/docker/cp-test.txt multinode-20220511161910-84527-m03:/home/docker/cp-test_multinode-20220511161910-84527_multinode-20220511161910-84527-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527-m03 "sudo cat /home/docker/cp-test_multinode-20220511161910-84527_multinode-20220511161910-84527-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 cp testdata/cp-test.txt multinode-20220511161910-84527-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 cp multinode-20220511161910-84527-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile964987628/001/cp-test_multinode-20220511161910-84527-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 cp multinode-20220511161910-84527-m02:/home/docker/cp-test.txt multinode-20220511161910-84527:/home/docker/cp-test_multinode-20220511161910-84527-m02_multinode-20220511161910-84527.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527 "sudo cat /home/docker/cp-test_multinode-20220511161910-84527-m02_multinode-20220511161910-84527.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 cp multinode-20220511161910-84527-m02:/home/docker/cp-test.txt multinode-20220511161910-84527-m03:/home/docker/cp-test_multinode-20220511161910-84527-m02_multinode-20220511161910-84527-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527-m03 "sudo cat /home/docker/cp-test_multinode-20220511161910-84527-m02_multinode-20220511161910-84527-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 cp testdata/cp-test.txt multinode-20220511161910-84527-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 cp multinode-20220511161910-84527-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile964987628/001/cp-test_multinode-20220511161910-84527-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 cp multinode-20220511161910-84527-m03:/home/docker/cp-test.txt multinode-20220511161910-84527:/home/docker/cp-test_multinode-20220511161910-84527-m03_multinode-20220511161910-84527.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527 "sudo cat /home/docker/cp-test_multinode-20220511161910-84527-m03_multinode-20220511161910-84527.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 cp multinode-20220511161910-84527-m03:/home/docker/cp-test.txt multinode-20220511161910-84527-m02:/home/docker/cp-test_multinode-20220511161910-84527-m03_multinode-20220511161910-84527-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 ssh -n multinode-20220511161910-84527-m02 "sudo cat /home/docker/cp-test_multinode-20220511161910-84527-m03_multinode-20220511161910-84527-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (23.30s)
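
The CopyFile run above exercises every direction of minikube cp: host-to-node, node-to-host, and node-to-node, each copy verified by reading the file back over minikube ssh. A minimal sketch of the same pattern; the profile "demo" and its node names are placeholders, not from this run:

    # host -> node, then read it back over ssh to confirm the contents
    minikube -p demo cp testdata/cp-test.txt demo-m02:/home/docker/cp-test.txt
    minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"
    # node -> node copies use the same subcommand with two node paths
    minikube -p demo cp demo-m02:/home/docker/cp-test.txt demo:/home/docker/cp-test-from-m02.txt
    minikube -p demo ssh -n demo "sudo cat /home/docker/cp-test-from-m02.txt"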

                                                
                                    
TestMultiNode/serial/StopNode (11.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 node stop m03: (9.046337164s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status: exit status 7 (1.24717075s)

                                                
                                                
-- stdout --
	multinode-20220511161910-84527
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220511161910-84527-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220511161910-84527-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status --alsologtostderr: exit status 7 (1.245114362s)

                                                
                                                
-- stdout --
	multinode-20220511161910-84527
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220511161910-84527-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220511161910-84527-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0511 16:25:35.449984   91855 out.go:296] Setting OutFile to fd 1 ...
	I0511 16:25:35.450124   91855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 16:25:35.450129   91855 out.go:309] Setting ErrFile to fd 2...
	I0511 16:25:35.450133   91855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 16:25:35.450229   91855 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	I0511 16:25:35.450408   91855 out.go:303] Setting JSON to false
	I0511 16:25:35.450423   91855 mustload.go:65] Loading cluster: multinode-20220511161910-84527
	I0511 16:25:35.450706   91855 config.go:178] Loaded profile config "multinode-20220511161910-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 16:25:35.450718   91855 status.go:253] checking status of multinode-20220511161910-84527 ...
	I0511 16:25:35.451113   91855 cli_runner.go:164] Run: docker container inspect multinode-20220511161910-84527 --format={{.State.Status}}
	I0511 16:25:35.571298   91855 status.go:328] multinode-20220511161910-84527 host status = "Running" (err=<nil>)
	I0511 16:25:35.571327   91855 host.go:66] Checking if "multinode-20220511161910-84527" exists ...
	I0511 16:25:35.571647   91855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220511161910-84527
	I0511 16:25:35.692954   91855 host.go:66] Checking if "multinode-20220511161910-84527" exists ...
	I0511 16:25:35.693276   91855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0511 16:25:35.693346   91855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220511161910-84527
	I0511 16:25:35.814646   91855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61084 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/multinode-20220511161910-84527/id_rsa Username:docker}
	I0511 16:25:35.896209   91855 ssh_runner.go:195] Run: systemctl --version
	I0511 16:25:35.900832   91855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0511 16:25:35.909973   91855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220511161910-84527
	I0511 16:25:36.034311   91855 kubeconfig.go:92] found "multinode-20220511161910-84527" server: "https://127.0.0.1:61088"
	I0511 16:25:36.034337   91855 api_server.go:165] Checking apiserver status ...
	I0511 16:25:36.034384   91855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0511 16:25:36.044556   91855 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1681/cgroup
	I0511 16:25:36.052695   91855 api_server.go:181] apiserver freezer: "7:freezer:/docker/3a658f4863d051d29133ce4142d7c579643d7130a018b166fb66ca78979a9c26/kubepods/burstable/podf8077217ef0d2502ed007787fe3a5494/79a296ef8d43ed17ddbee7d23925e135afc9e9409cdc8375a409f7c9d14e36fe"
	I0511 16:25:36.052762   91855 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3a658f4863d051d29133ce4142d7c579643d7130a018b166fb66ca78979a9c26/kubepods/burstable/podf8077217ef0d2502ed007787fe3a5494/79a296ef8d43ed17ddbee7d23925e135afc9e9409cdc8375a409f7c9d14e36fe/freezer.state
	I0511 16:25:36.059868   91855 api_server.go:203] freezer state: "THAWED"
	I0511 16:25:36.059883   91855 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61088/healthz ...
	I0511 16:25:36.065777   91855 api_server.go:266] https://127.0.0.1:61088/healthz returned 200:
	ok
	I0511 16:25:36.065788   91855 status.go:419] multinode-20220511161910-84527 apiserver status = Running (err=<nil>)
	I0511 16:25:36.065808   91855 status.go:255] multinode-20220511161910-84527 status: &{Name:multinode-20220511161910-84527 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0511 16:25:36.065824   91855 status.go:253] checking status of multinode-20220511161910-84527-m02 ...
	I0511 16:25:36.066092   91855 cli_runner.go:164] Run: docker container inspect multinode-20220511161910-84527-m02 --format={{.State.Status}}
	I0511 16:25:36.187190   91855 status.go:328] multinode-20220511161910-84527-m02 host status = "Running" (err=<nil>)
	I0511 16:25:36.187210   91855 host.go:66] Checking if "multinode-20220511161910-84527-m02" exists ...
	I0511 16:25:36.187481   91855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220511161910-84527-m02
	I0511 16:25:36.311641   91855 host.go:66] Checking if "multinode-20220511161910-84527-m02" exists ...
	I0511 16:25:36.311907   91855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0511 16:25:36.311965   91855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220511161910-84527-m02
	I0511 16:25:36.429307   91855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61421 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/multinode-20220511161910-84527-m02/id_rsa Username:docker}
	I0511 16:25:36.510924   91855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0511 16:25:36.520110   91855 status.go:255] multinode-20220511161910-84527-m02 status: &{Name:multinode-20220511161910-84527-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0511 16:25:36.520133   91855 status.go:253] checking status of multinode-20220511161910-84527-m03 ...
	I0511 16:25:36.520412   91855 cli_runner.go:164] Run: docker container inspect multinode-20220511161910-84527-m03 --format={{.State.Status}}
	I0511 16:25:36.641145   91855 status.go:328] multinode-20220511161910-84527-m03 host status = "Stopped" (err=<nil>)
	I0511 16:25:36.641164   91855 status.go:341] host is not running, skipping remaining checks
	I0511 16:25:36.641169   91855 status.go:255] multinode-20220511161910-84527-m03 status: &{Name:multinode-20220511161910-84527-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (11.54s)
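
Once a node is stopped, minikube status still prints every node but exits non-zero (exit status 7 above), so scripted checks have to tolerate that. A small sketch with a placeholder profile:

    minikube -p demo node stop m03
    # the stopped worker flips the status exit code to 7 even though the control plane is fine
    minikube -p demo status || echo "status exited $? with at least one node down"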

                                                
                                    
TestMultiNode/serial/StartAfterStop (50.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 node start m03 --alsologtostderr: (48.982348722s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status: (1.597450726s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (50.74s)
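
Bringing the node back follows the reverse path; the sketch below mirrors the commands above with placeholder names:

    minikube -p demo node start m03 --alsologtostderr
    minikube -p demo status      # exits 0 again once every node is Running
    kubectl get nodes            # the restarted worker should rejoin the cluster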

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (250.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220511161910-84527
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220511161910-84527
E0511 16:27:01.201269   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220511161910-84527: (40.491497239s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220511161910-84527 --wait=true -v=8 --alsologtostderr
E0511 16:27:35.316395   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:28:34.694599   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:28:58.406223   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220511161910-84527 --wait=true -v=8 --alsologtostderr: (3m29.569803232s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220511161910-84527
--- PASS: TestMultiNode/serial/RestartKeepsNodes (250.18s)
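
The restart check above is: record the node list, stop the whole profile, start it again with --wait=true, and confirm the node list is unchanged. Roughly, with a placeholder profile:

    minikube node list -p demo
    minikube stop -p demo
    minikube start -p demo --wait=true -v=8 --alsologtostderr
    minikube node list -p demo   # should match the list captured before the stop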

                                                
                                    
TestMultiNode/serial/DeleteNode (17.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 node delete m03: (14.867017344s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status --alsologtostderr
multinode_test.go:398: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status --alsologtostderr: (1.118875534s)
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:422: (dbg) Done: kubectl get nodes: (1.768045571s)
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (17.93s)
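
Node deletion is checked both from the Docker side and from the API server. A sketch of the same verification, with placeholder names:

    minikube -p demo node delete m03
    minikube -p demo status --alsologtostderr
    docker volume ls             # confirm nothing was left behind for the deleted node
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'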

                                                
                                    
TestMultiNode/serial/StopMultiNode (36.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 stop: (35.578532385s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status: exit status 7 (290.190897ms)

                                                
                                                
-- stdout --
	multinode-20220511161910-84527
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220511161910-84527-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status --alsologtostderr: exit status 7 (290.948315ms)

                                                
                                                
-- stdout --
	multinode-20220511161910-84527
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220511161910-84527-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0511 16:31:31.426207   92739 out.go:296] Setting OutFile to fd 1 ...
	I0511 16:31:31.426405   92739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 16:31:31.426411   92739 out.go:309] Setting ErrFile to fd 2...
	I0511 16:31:31.426415   92739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 16:31:31.426517   92739 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
	I0511 16:31:31.426700   92739 out.go:303] Setting JSON to false
	I0511 16:31:31.426716   92739 mustload.go:65] Loading cluster: multinode-20220511161910-84527
	I0511 16:31:31.427053   92739 config.go:178] Loaded profile config "multinode-20220511161910-84527": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 16:31:31.427063   92739 status.go:253] checking status of multinode-20220511161910-84527 ...
	I0511 16:31:31.427488   92739 cli_runner.go:164] Run: docker container inspect multinode-20220511161910-84527 --format={{.State.Status}}
	I0511 16:31:31.548707   92739 status.go:328] multinode-20220511161910-84527 host status = "Stopped" (err=<nil>)
	I0511 16:31:31.548738   92739 status.go:341] host is not running, skipping remaining checks
	I0511 16:31:31.548746   92739 status.go:255] multinode-20220511161910-84527 status: &{Name:multinode-20220511161910-84527 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0511 16:31:31.548788   92739 status.go:253] checking status of multinode-20220511161910-84527-m02 ...
	I0511 16:31:31.549105   92739 cli_runner.go:164] Run: docker container inspect multinode-20220511161910-84527-m02 --format={{.State.Status}}
	I0511 16:31:31.662362   92739 status.go:328] multinode-20220511161910-84527-m02 host status = "Stopped" (err=<nil>)
	I0511 16:31:31.662400   92739 status.go:341] host is not running, skipping remaining checks
	I0511 16:31:31.662406   92739 status.go:255] multinode-20220511161910-84527-m02 status: &{Name:multinode-20220511161910-84527-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (36.16s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (147.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220511161910-84527 --wait=true -v=8 --alsologtostderr --driver=docker 
E0511 16:32:01.210268   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:32:35.319899   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 16:33:34.701238   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220511161910-84527 --wait=true -v=8 --alsologtostderr --driver=docker : (2m24.494474738s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220511161910-84527 status --alsologtostderr: (1.148454605s)
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:372: (dbg) Done: kubectl get nodes: (1.739836778s)
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (147.53s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (100.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220511161910-84527
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220511161910-84527-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220511161910-84527-m02 --driver=docker : exit status 14 (510.577249ms)

                                                
                                                
-- stdout --
	* [multinode-20220511161910-84527-m02] minikube v1.25.2 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13639
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220511161910-84527-m02' is duplicated with machine name 'multinode-20220511161910-84527-m02' in profile 'multinode-20220511161910-84527'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220511161910-84527-m03 --driver=docker 
E0511 16:35:04.299160   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220511161910-84527-m03 --driver=docker : (1m24.518461549s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220511161910-84527
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220511161910-84527: exit status 80 (696.706717ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20220511161910-84527
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220511161910-84527-m03 already exists in multinode-20220511161910-84527-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220511161910-84527-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220511161910-84527-m03: (15.154667395s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (100.94s)
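
Both failures above are the expected ones: reusing an existing machine name as a profile name is rejected up front (exit 14, MK_USAGE), and node add refuses to create a node whose generated name is already taken by a standalone profile (exit 80, GUEST_NODE_ADD). Schematically, with placeholder names:

    minikube start -p demo-m02 --driver=docker   # rejected if profile "demo" already has a machine called demo-m02
    minikube node add -p demo                    # rejected if the next node name, demo-m03, already exists as its own profile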

                                                
                                    
TestPreload (205.45s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220511163603-84527 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0511 16:37:01.205489   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:37:35.314646   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20220511163603-84527 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: (2m20.560392963s)
preload_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20220511163603-84527 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-20220511163603-84527 -- docker pull gcr.io/k8s-minikube/busybox: (2.006951942s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220511163603-84527 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3
E0511 16:38:34.693080   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20220511163603-84527 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3: (48.274942845s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20220511163603-84527 -- docker images
helpers_test.go:175: Cleaning up "test-preload-20220511163603-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220511163603-84527
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220511163603-84527: (13.918849612s)
--- PASS: TestPreload (205.45s)
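
The preload check above boils down to: create a cluster with --preload=false on an older Kubernetes, pull an extra image inside it, upgrade the cluster, and confirm the image survived. A condensed sketch (the profile name is a placeholder):

    minikube start -p preload-demo --memory=2200 --preload=false --driver=docker --kubernetes-version=v1.17.0
    minikube ssh -p preload-demo -- docker pull gcr.io/k8s-minikube/busybox
    minikube start -p preload-demo --memory=2200 --driver=docker --kubernetes-version=v1.17.3
    minikube ssh -p preload-demo -- docker images   # the busybox image should still be listed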

                                                
                                    
TestScheduledStopUnix (153.68s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220511163928-84527 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220511163928-84527 --memory=2048 --driver=docker : (1m14.452934727s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220511163928-84527 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220511163928-84527 -n scheduled-stop-20220511163928-84527
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220511163928-84527 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220511163928-84527 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220511163928-84527 -n scheduled-stop-20220511163928-84527
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220511163928-84527
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220511163928-84527 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0511 16:41:37.798507   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220511163928-84527
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220511163928-84527: exit status 7 (167.490133ms)

                                                
                                                
-- stdout --
	scheduled-stop-20220511163928-84527
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220511163928-84527 -n scheduled-stop-20220511163928-84527
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220511163928-84527 -n scheduled-stop-20220511163928-84527: exit status 7 (165.389253ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220511163928-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220511163928-84527
E0511 16:42:01.212402   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220511163928-84527: (6.496077365s)
--- PASS: TestScheduledStopUnix (153.68s)
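
Scheduled stop is driven entirely by flags on minikube stop; the sketch below compresses the sequence above, with a placeholder profile and an illustrative sleep:

    minikube stop -p sched-demo --schedule 5m
    minikube status --format={{.TimeToStop}} -p sched-demo   # shows the pending countdown
    minikube stop -p sched-demo --cancel-scheduled           # clears it again
    minikube stop -p sched-demo --schedule 15s
    sleep 20; minikube status -p sched-demo                  # exit status 7 once the stop has fired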

                                                
                                    
TestSkaffold (128.58s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:56: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe2411389053 version
skaffold_test.go:56: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe2411389053 version: (1.145783333s)
skaffold_test.go:60: skaffold version: v1.38.0
skaffold_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220511164202-84527 --memory=2600 --driver=docker 
E0511 16:42:35.322796   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
skaffold_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220511164202-84527 --memory=2600 --driver=docker : (1m15.827224814s)
skaffold_test.go:83: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:107: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe2411389053 run --minikube-profile skaffold-20220511164202-84527 --kube-context skaffold-20220511164202-84527 --status-check=true --port-forward=false --interactive=false
E0511 16:43:34.703331   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
skaffold_test.go:107: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe2411389053 run --minikube-profile skaffold-20220511164202-84527 --kube-context skaffold-20220511164202-84527 --status-check=true --port-forward=false --interactive=false: (26.937922151s)
skaffold_test.go:113: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-797cdb8665-xg9zs" [a9816eed-1e1c-47a8-909c-b3d4f2ef8cb6] Running
skaffold_test.go:113: (dbg) TestSkaffold: app=leeroy-app healthy within 5.022945486s
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-789d86844-mwcml" [eecbf009-699e-4fe2-8b1d-c9ef11ad0ab0] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-web healthy within 5.017209321s
helpers_test.go:175: Cleaning up "skaffold-20220511164202-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220511164202-84527
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220511164202-84527: (13.631412493s)
--- PASS: TestSkaffold (128.58s)
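
The skaffold run above needs nothing beyond a running profile and the matching kube-context. A trimmed-down sketch with placeholder names (leeroy-app is the sample app this test deploys):

    minikube start -p skaffold-demo --memory=2600 --driver=docker
    skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
      --status-check=true --port-forward=false --interactive=false
    kubectl get pods -l app=leeroy-app   # the sample app's pods should come up Running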

                                                
                                    
TestInsufficientStorage (64.11s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220511164411-84527 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220511164411-84527 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (50.959903611s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f8d2080b-4cad-4524-9805-816dc18e9f24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220511164411-84527] minikube v1.25.2 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"22e171ab-791a-4060-bada-6f627f10172b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13639"}}
	{"specversion":"1.0","id":"16996be2-a906-40a6-8531-727456b2ac62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig"}}
	{"specversion":"1.0","id":"5250c095-af7b-4e9b-a5b0-9c9ed3519fd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"107c01f9-34c3-4280-be78-f19ed6b218ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"33d2e5ad-5248-41a7-a2fd-6deb40d64372","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube"}}
	{"specversion":"1.0","id":"b2022082-6e17-438f-b41b-8d9938b67417","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"82f360d8-2067-400c-b94c-6a668e14c324","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"810050e4-e906-451a-a2a0-0cb1c39c818b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"10ac622a-3560-4c15-9238-7ee9ecc05acc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"7b3089ef-a16f-4764-ae4c-b84281ecf3b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220511164411-84527 in cluster insufficient-storage-20220511164411-84527","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5236a0ca-25c9-4ed1-9abb-8ef68966d227","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed6508fd-cdfe-4bf9-ab51-a31a7aed91c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ea1c9a40-57f2-436e-b4ae-7293e757d69c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220511164411-84527 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220511164411-84527 --output=json --layout=cluster: exit status 7 (604.815651ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220511164411-84527","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220511164411-84527","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0511 16:45:02.759344   94783 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220511164411-84527" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220511164411-84527 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220511164411-84527 --output=json --layout=cluster: exit status 7 (612.016137ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220511164411-84527","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220511164411-84527","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0511 16:45:03.372191   94801 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220511164411-84527" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	E0511 16:45:03.380727   94801 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/insufficient-storage-20220511164411-84527/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220511164411-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220511164411-84527
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220511164411-84527: (11.93036733s)
--- PASS: TestInsufficientStorage (64.11s)
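
Both status calls above return machine-readable JSON, which is convenient for tooling. A small sketch of pulling the status name out of it; jq is an assumption here, not something this suite uses:

    minikube status -p tiny-demo --output=json --layout=cluster | jq -r '.StatusName'
    # prints "InsufficientStorage" (status code 507) when /var inside the node is nearly full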

                                                
                                    
TestRunningBinaryUpgrade (135.75s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1529420907.exe start -p running-upgrade-20220511165656-84527 --memory=2200 --vm-driver=docker 
E0511 16:57:01.157954   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 16:57:35.276195   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1529420907.exe start -p running-upgrade-20220511165656-84527 --memory=2200 --vm-driver=docker : (1m36.681305564s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-20220511165656-84527 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0511 16:58:34.648618   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:58:47.481367   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
version_upgrade_test.go:137: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-20220511165656-84527 --memory=2200 --alsologtostderr -v=1 --driver=docker : (29.233940126s)
helpers_test.go:175: Cleaning up "running-upgrade-20220511165656-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220511165656-84527

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220511165656-84527: (9.063580302s)
--- PASS: TestRunningBinaryUpgrade (135.75s)
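
The upgrade path being tested is simply: start a cluster with an old minikube release, then run start on the same profile with the newer binary while the cluster is still running. Sketch, with placeholder binary and profile names (note the old release still used --vm-driver):

    ./minikube-v1.9.0 start -p upgrade-demo --memory=2200 --vm-driver=docker
    ./minikube-new    start -p upgrade-demo --memory=2200 --alsologtostderr -v=1 --driver=docker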

                                                
                                    
TestKubernetesUpgrade (176.62s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220511165359-84527 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0511 16:54:15.209259   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220511165359-84527 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : (1m20.679782134s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220511165359-84527
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220511165359-84527: (8.772559955s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220511165359-84527 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220511165359-84527 status --format={{.Host}}: exit status 7 (168.556468ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220511165359-84527 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220511165359-84527 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker : (54.7074954s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220511165359-84527 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220511165359-84527 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220511165359-84527 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (680.014613ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220511165359-84527] minikube v1.25.2 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13639
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220511165359-84527
	    minikube start -p kubernetes-upgrade-20220511165359-84527 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220511165359-845272 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220511165359-84527 --kubernetes-version=v1.23.6-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220511165359-84527 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220511165359-84527 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker : (16.309926965s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220511165359-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220511165359-84527

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220511165359-84527: (15.154251208s)
--- PASS: TestKubernetesUpgrade (176.62s)
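
The Kubernetes upgrade/downgrade rules exercised above, reduced to their commands (the profile name is a placeholder):

    minikube start -p k8s-demo --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
    minikube stop  -p k8s-demo
    minikube start -p k8s-demo --memory=2200 --kubernetes-version=v1.23.6-rc.0 --driver=docker   # upgrade: allowed
    minikube start -p k8s-demo --memory=2200 --kubernetes-version=v1.16.0 --driver=docker        # downgrade: refused, exit 106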

                                                
                                    
TestMissingContainerUpgrade (199.09s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3560234622.exe start -p missing-upgrade-20220511165321-84527 --memory=2200 --driver=docker 
E0511 16:53:34.645961   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 16:53:47.474930   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3560234622.exe start -p missing-upgrade-20220511165321-84527 --memory=2200 --driver=docker : (1m12.178065116s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220511165321-84527
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220511165321-84527: (11.596113085s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220511165321-84527
version_upgrade_test.go:336: (dbg) Run:  out/minikube-darwin-amd64 start -p missing-upgrade-20220511165321-84527 --memory=2200 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-darwin-amd64 start -p missing-upgrade-20220511165321-84527 --memory=2200 --alsologtostderr -v=1 --driver=docker : (1m42.775694859s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220511165321-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220511165321-84527
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220511165321-84527: (11.580266583s)
--- PASS: TestMissingContainerUpgrade (199.09s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.92s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.25.2 on darwin
- MINIKUBE_LOCATION=13639
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2763614199/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2763614199/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2763614199/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2763614199/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.92s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (12.13s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.25.2 on darwin
- MINIKUBE_LOCATION=13639
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current501127074/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current501127074/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current501127074/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current501127074/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (12.13s)
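
Both hyperkit skip-upgrade subtests log the same non-fatal warning: the freshly downloaded docker-machine-driver-hyperkit binary cannot be chowned to root or given the setuid bit, because the run is non-interactive (--interactive=false) and sudo wants a password. Outside CI the warning is cleared by running the two commands the log prints; a minimal sketch, assuming the default MINIKUBE_HOME of ~/.minikube rather than the per-test temp directory used here:

$ sudo chown root:wheel ~/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s ~/.minikube/bin/docker-machine-driver-hyperkit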

TestStoppedBinaryUpgrade/Setup (1.36s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.36s)

TestStoppedBinaryUpgrade/Upgrade (141.3s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3882558300.exe start -p stopped-upgrade-20220511165640-84527 --memory=2200 --vm-driver=docker 
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3882558300.exe start -p stopped-upgrade-20220511165640-84527 --memory=2200 --vm-driver=docker : (1m13.278062365s)
version_upgrade_test.go:199: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3882558300.exe -p stopped-upgrade-20220511165640-84527 stop
version_upgrade_test.go:199: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3882558300.exe -p stopped-upgrade-20220511165640-84527 stop: (15.051521627s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-20220511165640-84527 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0511 16:58:17.747594   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-20220511165640-84527 --memory=2200 --alsologtostderr -v=1 --driver=docker : (52.966860134s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (141.30s)

TestStoppedBinaryUpgrade/MinikubeLogs (4.32s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220511165640-84527
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220511165640-84527: (4.316392841s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (4.32s)

TestPause/serial/Start (107.25s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220511165912-84527 --memory=2048 --install-addons=false --wait=all --driver=docker 
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220511165912-84527 --memory=2048 --install-addons=false --wait=all --driver=docker : (1m47.251055351s)
--- PASS: TestPause/serial/Start (107.25s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.52s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220511165917-84527 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220511165917-84527 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (519.251898ms)
-- stdout --
	* [NoKubernetes-20220511165917-84527] minikube v1.25.2 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13639
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.52s)
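
This subtest deliberately passes a conflicting pair of flags and expects the usage error above (exit status 14, MK_USAGE): --no-kubernetes asks for a cluster-less node while --kubernetes-version pins a Kubernetes release. A minimal sketch of following the hint in stderr, assuming the version had been pinned via the global config rather than on the command line:

$ out/minikube-darwin-amd64 config unset kubernetes-version
$ out/minikube-darwin-amd64 start -p NoKubernetes-20220511165917-84527 --no-kubernetes --driver=docker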

TestNoKubernetes/serial/StartWithK8s (53.84s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220511165917-84527 --driver=docker 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220511165917-84527 --driver=docker : (53.183673655s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220511165917-84527 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (53.84s)

TestNoKubernetes/serial/StartWithStopK8s (22.85s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220511165917-84527 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220511165917-84527 --no-kubernetes --driver=docker : (14.581089115s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220511165917-84527 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220511165917-84527 status -o json: exit status 2 (639.86429ms)
-- stdout --
	{"Name":"NoKubernetes-20220511165917-84527","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220511165917-84527
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220511165917-84527: (7.624577238s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.85s)
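
The Non-zero exit above is expected: restarting the existing profile with --no-kubernetes leaves the container running but stops the kubelet and apiserver, and in this run `minikube status` reports that state with exit status 2 while still printing the JSON the test parses. A small sketch of checking the same thing by hand:

$ out/minikube-darwin-amd64 -p NoKubernetes-20220511165917-84527 status -o json
$ echo $?    # 2 in the run above: Host "Running", Kubelet and APIServer "Stopped"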

TestNoKubernetes/serial/Start (42.78s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220511165917-84527 --no-kubernetes --driver=docker 
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220511165917-84527 --no-kubernetes --driver=docker : (42.780623227s)
--- PASS: TestNoKubernetes/serial/Start (42.78s)

TestPause/serial/SecondStartNoReconfiguration (7.61s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220511165912-84527 --alsologtostderr -v=1 --driver=docker 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220511165912-84527 --alsologtostderr -v=1 --driver=docker : (7.598546826s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.61s)

TestPause/serial/Pause (0.87s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220511165912-84527 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.87s)

TestPause/serial/VerifyStatus (0.64s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20220511165912-84527 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20220511165912-84527 --output=json --layout=cluster: exit status 2 (640.579616ms)
-- stdout --
	{"Name":"pause-20220511165912-84527","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220511165912-84527","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.64s)
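
Here exit status 2 is again tolerated: with the profile paused, the cluster layout reports the apiserver as Paused (StatusCode 418) and the kubelet as Stopped (405), as shown in the JSON above. A sketch of pulling one field out of that payload, assuming jq is available (it is not part of the test):

$ out/minikube-darwin-amd64 status -p pause-20220511165912-84527 --output=json --layout=cluster | jq -r '.Nodes[0].Components.apiserver.StatusName'
# prints "Paused" while the profile is paused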

TestPause/serial/Unpause (0.84s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-20220511165912-84527 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.84s)

TestPause/serial/PauseAgain (0.93s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220511165912-84527 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.93s)

TestPause/serial/DeletePaused (5.4s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-20220511165912-84527 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-20220511165912-84527 --alsologtostderr -v=5: (5.394932058s)
--- PASS: TestPause/serial/DeletePaused (5.40s)

TestPause/serial/VerifyDeletedResources (1.23s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220511165912-84527
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220511165912-84527: exit status 1 (125.664036ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220511165912-84527
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.23s)
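
The test treats the failed `docker volume inspect` as the success signal: after `minikube delete`, the profile's container, volume, and network should all be gone. A sketch of the same spot checks run by hand; the --filter/--format flags are ordinary docker CLI options, not taken from the test itself, and each command should print nothing (or report "No such volume") once cleanup has worked:

$ docker ps -a --filter name=pause-20220511165912-84527 --format '{{.Names}}'
$ docker volume inspect pause-20220511165912-84527
$ docker network ls --filter name=pause-20220511165912-84527 --format '{{.Name}}'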

TestNoKubernetes/serial/VerifyK8sNotRunning (0.64s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220511165917-84527 "sudo systemctl is-active --quiet service kubelet"
=== CONT  TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220511165917-84527 "sudo systemctl is-active --quiet service kubelet": exit status 1 (643.096739ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.64s)
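
The exit status 1 above is the point of the check: with Kubernetes disabled, `systemctl is-active` inside the node reports the kubelet unit as not active (the "Process exited with status 3" in stderr is systemctl's inactive result), so the ssh command fails and the test passes. A simplified variant of the same probe, run by hand:

$ out/minikube-darwin-amd64 ssh -p NoKubernetes-20220511165917-84527 "sudo systemctl is-active kubelet"
# prints "inactive" and exits non-zero while no cluster is running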

TestNoKubernetes/serial/ProfileList (1.64s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.64s)

TestNetworkPlugins/group/auto/Start (109.65s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220511164515-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220511164515-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (1m49.646360723s)
--- PASS: TestNetworkPlugins/group/auto/Start (109.65s)

TestNoKubernetes/serial/Stop (8.73s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220511165917-84527
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220511165917-84527: (8.732328391s)
--- PASS: TestNoKubernetes/serial/Stop (8.73s)

TestNoKubernetes/serial/StartNoArgs (20.3s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220511165917-84527 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220511165917-84527 --driver=docker : (20.298605048s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (20.30s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.6s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220511165917-84527 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220511165917-84527 "sudo systemctl is-active --quiet service kubelet": exit status 1 (601.731092ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.60s)

TestNetworkPlugins/group/false/Start (106.1s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220511164516-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
E0511 17:02:01.160201   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 17:02:18.378005   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 17:02:35.272849   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220511164516-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (1m46.102246875s)
--- PASS: TestNetworkPlugins/group/false/Start (106.10s)

TestNetworkPlugins/group/auto/KubeletFlags (0.67s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220511164515-84527 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.67s)

TestNetworkPlugins/group/auto/NetCatPod (13s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220511164515-84527 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context auto-20220511164515-84527 replace --force -f testdata/netcat-deployment.yaml: (1.968425573s)
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-6lrpw" [bc812084-1c83-4550-9b4d-7282fdd91698] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-6lrpw" [bc812084-1c83-4550-9b4d-7282fdd91698] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.009132752s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.00s)

TestNetworkPlugins/group/auto/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220511164515-84527 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220511164515-84527 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220511164515-84527 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220511164515-84527 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (126.625444ms)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/cilium/Start (124.57s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220511164516-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20220511164516-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (2m4.574105634s)
--- PASS: TestNetworkPlugins/group/cilium/Start (124.57s)

TestNetworkPlugins/group/false/KubeletFlags (0.8s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20220511164516-84527 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.80s)

TestNetworkPlugins/group/false/NetCatPod (13.89s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220511164516-84527 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context false-20220511164516-84527 replace --force -f testdata/netcat-deployment.yaml: (1.865019913s)
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-zrbvc" [6974459b-fa8a-41ae-9779-781baba9d517] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0511 17:03:47.478789   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
helpers_test.go:342: "netcat-668db85669-zrbvc" [6974459b-fa8a-41ae-9779-781baba9d517] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.006838004s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.89s)

TestNetworkPlugins/group/false/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220511164516-84527 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-20220511164516-84527 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (5.13s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-20220511164516-84527 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-20220511164516-84527 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.127468861s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.13s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-t7lp6" [5e4016c9-2e92-4f29-aa6c-d482ed0c8801] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.023080484s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.65s)
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20220511164516-84527 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.65s)

TestNetworkPlugins/group/cilium/NetCatPod (12.66s)
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220511164516-84527 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context cilium-20220511164516-84527 replace --force -f testdata/netcat-deployment.yaml: (2.619219626s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-qsm5m" [ad106219-021e-4245-a60b-e512793d12f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-qsm5m" [ad106219-021e-4245-a60b-e512793d12f0] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.013576726s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.66s)

TestNetworkPlugins/group/cilium/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220511164516-84527 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.14s)

TestNetworkPlugins/group/cilium/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220511164516-84527 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.13s)

TestNetworkPlugins/group/cilium/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220511164516-84527 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.12s)

TestNetworkPlugins/group/custom-weave/Start (65.33s)
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-weave-20220511164516-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker 
E0511 17:07:01.155524   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p custom-weave-20220511164516-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker : (1m5.331408075s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (65.33s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.66s)
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-weave-20220511164516-84527 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.66s)

TestNetworkPlugins/group/custom-weave/NetCatPod (12.98s)
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context custom-weave-20220511164516-84527 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context custom-weave-20220511164516-84527 replace --force -f testdata/netcat-deployment.yaml: (1.934566351s)
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-ngw8b" [9e68407d-3ce8-480a-bbc4-65a35aa0b23e] Pending
helpers_test.go:342: "netcat-668db85669-ngw8b" [9e68407d-3ce8-480a-bbc4-65a35aa0b23e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-ngw8b" [9e68407d-3ce8-480a-bbc4-65a35aa0b23e] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 11.015227654s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (12.98s)

TestNetworkPlugins/group/enable-default-cni/Start (57.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220511164515-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E0511 17:07:35.264088   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 17:08:10.306410   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:08:10.311792   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:08:10.325101   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:08:10.350265   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:08:10.400117   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:08:10.483600   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:08:10.650107   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:08:10.975150   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:08:11.625133   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:08:12.905517   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:08:15.475170   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:08:20.600263   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:08:24.257473   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 17:08:30.850342   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220511164515-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (57.268658932s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (57.27s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.75s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220511164515-84527 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.75s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.03s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220511164515-84527 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context enable-default-cni-20220511164515-84527 replace --force -f testdata/netcat-deployment.yaml: (1.996880238s)
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-br2qm" [ba6fcc48-ec5d-4cbc-bf7a-b1eee363021d] Pending
E0511 17:08:34.650418   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
helpers_test.go:342: "netcat-668db85669-br2qm" [ba6fcc48-ec5d-4cbc-bf7a-b1eee363021d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-br2qm" [ba6fcc48-ec5d-4cbc-bf7a-b1eee363021d] Running
E0511 17:08:41.502576   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:08:41.508134   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:08:41.518521   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:08:41.544235   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:08:41.585889   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:08:41.674536   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:08:41.836206   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:08:42.157429   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:08:42.800563   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:08:44.084723   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.010891423s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.03s)

TestNetworkPlugins/group/bridge/Start (85.62s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220511164515-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
E0511 17:14:57.751489   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 17:14:59.972881   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:15:42.415526   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:16:10.133357   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220511164515-84527 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (1m25.623316828s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.62s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.65s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220511164515-84527 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.65s)

TestNetworkPlugins/group/bridge/NetCatPod (15.93s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220511164515-84527 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context bridge-20220511164515-84527 replace --force -f testdata/netcat-deployment.yaml: (1.897526258s)
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-kftsk" [7e6b29e7-1248-4c76-bc5a-d1887c655f32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-kftsk" [7e6b29e7-1248-4c76-bc5a-d1887c655f32] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.007106323s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.93s)

TestStartStop/group/old-k8s-version/serial/FirstStart (147.88s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220511172231-84527 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0511 17:22:35.316910   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 17:23:10.356733   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:23:34.429399   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:23:34.697371   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 17:23:41.548559   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:23:47.524783   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 17:24:02.160213   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:24:33.472021   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-20220511172231-84527 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: (2m27.878882775s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (147.88s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.14s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220511172231-84527 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Done: kubectl --context old-k8s-version-20220511172231-84527 create -f testdata/busybox.yaml: (2.002101794s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [212877ec-4e27-4d5c-8dd3-d90919dc5735] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [212877ec-4e27-4d5c-8dd3-d90919dc5735] Running
E0511 17:25:04.314257   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 17:25:04.635839   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.019147944s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220511172231-84527 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.14s)
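A rough manual equivalent of the deploy-and-verify step above, for reference; the manifest path mirrors the test's testdata/busybox.yaml and kubectl wait stands in for the harness's own polling helpers (both are illustrative assumptions):

	# create the test pod in the profile's kube context
	kubectl --context old-k8s-version-20220511172231-84527 create -f testdata/busybox.yaml
	# block until the pod labelled integration-test=busybox reports Ready (the test allows up to 8m)
	kubectl --context old-k8s-version-20220511172231-84527 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
	# the test then inspects the open-file limit inside the container
	kubectl --context old-k8s-version-20220511172231-84527 exec busybox -- /bin/sh -c "ulimit -n"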

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220511172231-84527 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220511172231-84527 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.81s)
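The addon commands in this step can be run as-is against the same profile; the grep at the end is only an illustrative way to confirm the fake registry override landed in the deployment spec:

	# enable metrics-server with the image and registry overrides used by the test
	out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220511172231-84527 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	# confirm the deployment exists and references the overridden registry
	kubectl --context old-k8s-version-20220511172231-84527 describe deploy/metrics-server -n kube-system | grep fake.domain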

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (18.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220511172231-84527 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20220511172231-84527 --alsologtostderr -v=3: (18.735880736s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (18.74s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220511172231-84527 -n old-k8s-version-20220511172231-84527
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220511172231-84527 -n old-k8s-version-20220511172231-84527: exit status 7 (167.735237ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220511172231-84527 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.47s)
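A sketch of the same check from a POSIX shell; the trailing || echo is only there because status deliberately exits non-zero (7) while the node is down:

	# Host reports Stopped and the command exits 7 -- expected for a stopped profile
	out/minikube-darwin-amd64 status --format='{{.Host}}' -p old-k8s-version-20220511172231-84527 || echo "status exited with $?"
	# addons can still be toggled while stopped; the selection is recorded in the profile and applied on the next start
	out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220511172231-84527 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4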

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (48.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220511172231-84527 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-20220511172231-84527 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: (48.024869104s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220511172231-84527 -n old-k8s-version-20220511172231-84527
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.75s)
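Restarting a stopped profile does not require repeating every flag; minikube keeps the profile's configuration, so a minimal restart sketch (flags shown only where the test pins them explicitly) looks like:

	# bring the stopped profile back with its saved settings
	out/minikube-darwin-amd64 start -p old-k8s-version-20220511172231-84527 --driver=docker --kubernetes-version=v1.16.0
	# the test then confirms the host is Running again
	out/minikube-darwin-amd64 status --format='{{.Host}}' -p old-k8s-version-20220511172231-84527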

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (322.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220511172536-84527 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.5
E0511 17:25:42.467501   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220511172536-84527 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.5: (5m22.150849045s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (322.15s)
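The --embed-certs flag writes the client certificate data directly into the generated kubeconfig entry instead of pointing at files under .minikube/profiles; a minimal sketch of starting and smoke-testing such a profile:

	# start a profile whose kubeconfig embeds certificate data inline
	out/minikube-darwin-amd64 start -p embed-certs-20220511172536-84527 --memory=2200 --embed-certs --driver=docker --kubernetes-version=v1.23.5
	# any kubectl call through the new context verifies the embedded credentials work
	kubectl --context embed-certs-20220511172536-84527 get nodes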

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (24.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0511 17:26:20.311686   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:26:20.318190   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:26:20.328338   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:26:20.353734   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:26:20.400692   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:26:20.486450   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:26:20.651642   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:26:20.976587   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:26:21.625774   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:26:22.912861   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-sp429" [26798401-526a-4cb2-be72-915df5d95af5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0511 17:26:25.475858   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:26:30.601438   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-sp429" [26798401-526a-4cb2-be72-915df5d95af5] Running
E0511 17:26:40.851262   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 24.015608429s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (24.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-sp429" [26798401-526a-4cb2-be72-915df5d95af5] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007158038s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220511172231-84527 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Done: kubectl --context old-k8s-version-20220511172231-84527 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (2.265052836s)
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7.27s)
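After the second start, the dashboard enabled while the cluster was stopped should come up on its own; a hand-run equivalent of the two checks above, with kubectl wait standing in for the harness's polling:

	# wait for the dashboard pod to become Ready in its namespace
	kubectl --context old-k8s-version-20220511172231-84527 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
	# and confirm the metrics scraper deployment that ships with the addon exists
	kubectl --context old-k8s-version-20220511172231-84527 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper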

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-20220511172231-84527 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.67s)
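The image check shells into the node and dumps the container runtime's image list as JSON; with jq available on the host (an assumption, not something the test uses), the repo tags are easy to scan:

	# raw JSON list of images known to the runtime inside the node
	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220511172231-84527 "sudo crictl images -o json"
	# extract just the repo tags for a quick eyeball
	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220511172231-84527 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'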

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (4.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-20220511172231-84527 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220511172231-84527 -n old-k8s-version-20220511172231-84527
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220511172231-84527 -n old-k8s-version-20220511172231-84527: exit status 2 (658.656117ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220511172231-84527 -n old-k8s-version-20220511172231-84527
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220511172231-84527 -n old-k8s-version-20220511172231-84527: exit status 2 (678.73215ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-20220511172231-84527 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220511172231-84527 -n old-k8s-version-20220511172231-84527
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220511172231-84527 -n old-k8s-version-20220511172231-84527
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.54s)
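While paused, status reports the API server as Paused and the kubelet as Stopped (as in the stdout blocks above) and exits non-zero until the profile is unpaused; a shell sketch of the same round trip:

	out/minikube-darwin-amd64 pause -p old-k8s-version-20220511172231-84527
	# exit status 2 is expected here while paused, hence the || true
	out/minikube-darwin-amd64 status --format='{{.APIServer}}' -p old-k8s-version-20220511172231-84527 || true
	out/minikube-darwin-amd64 status --format='{{.Kubelet}}' -p old-k8s-version-20220511172231-84527 || true
	# unpause and re-check; both fields should return to Running
	out/minikube-darwin-amd64 unpause -p old-k8s-version-20220511172231-84527
	out/minikube-darwin-amd64 status --format='{{.APIServer}}' -p old-k8s-version-20220511172231-84527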

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (114.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220511172703-84527 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6-rc.0
E0511 17:27:05.550419   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:27:16.150838   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:27:35.323571   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 17:27:42.293974   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:28:10.364805   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:28:34.432300   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:28:34.706184   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 17:28:39.231807   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:28:41.556380   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:28:47.531599   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220511172703-84527 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6-rc.0: (1m54.636152759s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (114.64s)
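With --preload=false the start skips the preloaded image tarball, so the Kubernetes images are pulled individually at start time; the invocation reduces to:

	# start without the preload tarball, pinned to a release-candidate Kubernetes version
	out/minikube-darwin-amd64 start -p no-preload-20220511172703-84527 --memory=2200 --preload=false --driver=docker --kubernetes-version=v1.23.6-rc.0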

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220511172703-84527 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Done: kubectl --context no-preload-20220511172703-84527 create -f testdata/busybox.yaml: (1.976097165s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [d99eeef0-4e1e-45c8-8996-3543714d0b50] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [d99eeef0-4e1e-45c8-8996-3543714d0b50] Running
E0511 17:29:04.216556   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.012832686s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220511172703-84527 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220511172703-84527 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220511172703-84527 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (19.80s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220511172703-84527 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20220511172703-84527 --alsologtostderr -v=3: (19.796130688s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (19.80s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220511172703-84527 -n no-preload-20220511172703-84527
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220511172703-84527 -n no-preload-20220511172703-84527: exit status 7 (176.840532ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220511172703-84527 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.48s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (387.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220511172703-84527 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6-rc.0
E0511 17:30:01.024852   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:30:01.030372   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:30:01.040573   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:30:01.061497   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:30:01.106628   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:30:01.189336   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:30:01.351017   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:30:01.672930   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:30:02.313147   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:30:03.593571   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:30:06.155998   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:30:11.280010   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:30:21.520496   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:30:42.001693   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:30:42.475778   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220511172703-84527 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6-rc.0: (6m26.220619444s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220511172703-84527 -n no-preload-20220511172703-84527
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (387.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220511172536-84527 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Done: kubectl --context embed-certs-20220511172536-84527 create -f testdata/busybox.yaml: (1.916634963s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [e72dcc22-a9d2-46c3-990b-139474db13b7] Pending
helpers_test.go:342: "busybox" [e72dcc22-a9d2-46c3-990b-139474db13b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [e72dcc22-a9d2-46c3-990b-139474db13b7] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.011105138s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220511172536-84527 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220511172536-84527 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220511172536-84527 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (19.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220511172536-84527 --alsologtostderr -v=3
E0511 17:31:20.319119   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:31:22.963870   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20220511172536-84527 --alsologtostderr -v=3: (19.490195235s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (19.49s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220511172536-84527 -n embed-certs-20220511172536-84527
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220511172536-84527 -n embed-certs-20220511172536-84527: exit status 7 (169.332749ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220511172536-84527 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (611.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220511172536-84527 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.5
E0511 17:31:37.812953   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 17:31:48.060830   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:32:01.224396   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 17:32:16.158354   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:32:35.333609   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 17:32:44.886972   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:33:10.373693   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:33:34.434374   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:33:34.712822   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 17:33:41.561837   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:33:47.538300   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 17:34:57.552099   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:35:01.043444   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:35:28.744901   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:35:38.460642   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 17:35:42.496345   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220511172536-84527 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.5: (10m10.653184943s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220511172536-84527 -n embed-certs-20220511172536-84527
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (611.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-hb5vs" [fa91745c-4902-4eae-b875-8aeb4f962756] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-8469778f77-hb5vs" [fa91745c-4902-4eae-b875-8aeb4f962756] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.012709075s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-hb5vs" [fa91745c-4902-4eae-b875-8aeb4f962756] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013840572s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220511172703-84527 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Done: kubectl --context no-preload-20220511172703-84527 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.940411875s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.96s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220511172703-84527 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.68s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (4.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220511172703-84527 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220511172703-84527 -n no-preload-20220511172703-84527
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220511172703-84527 -n no-preload-20220511172703-84527: exit status 2 (668.700249ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220511172703-84527 -n no-preload-20220511172703-84527
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220511172703-84527 -n no-preload-20220511172703-84527: exit status 2 (659.589059ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20220511172703-84527 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220511172703-84527 -n no-preload-20220511172703-84527
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220511172703-84527 -n no-preload-20220511172703-84527
E0511 17:36:20.347276   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.68s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/FirstStart (331.78s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220511173638-84527 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.5
E0511 17:37:01.242994   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 17:37:16.183783   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory
E0511 17:37:35.352457   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
E0511 17:38:10.393507   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:38:30.673999   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 17:38:34.456457   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:38:34.733408   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
E0511 17:38:41.579944   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:38:47.562309   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 17:39:00.117665   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:39:00.122806   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:39:00.134692   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:39:00.164832   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:39:00.204990   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:39:00.286691   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:39:00.447214   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:39:00.767337   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:39:01.415408   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:39:02.696163   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:39:05.262267   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:39:10.386995   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:39:20.627620   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:39:41.114472   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:40:01.052843   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/old-k8s-version-20220511172231-84527/client.crt: no such file or directory
E0511 17:40:22.078231   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:40:42.505852   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:41:13.514377   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory
E0511 17:41:20.355445   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220511173638-84527 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.5: (5m31.779899862s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (331.78s)
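The only non-default flag here is --apiserver-port=8444 (minikube's default is 8443); note that with the docker driver on macOS the host-side kubeconfig still points at a forwarded localhost port, while 8444 is what the API server listens on inside the node. A minimal sketch:

	# start with the API server bound to 8444 instead of the default 8443 inside the node
	out/minikube-darwin-amd64 start -p default-k8s-different-port-20220511173638-84527 --memory=2200 --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.5
	# a simple call through the new context confirms the cluster is reachable
	kubectl --context default-k8s-different-port-20220511173638-84527 get nodes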

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-t8wfn" [9cb3c0a9-8840-4eba-bcbf-446a162bda30] Running
E0511 17:41:44.005384   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:41:44.356115   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/addons-20220511155445-84527/client.crt: no such file or directory
E0511 17:41:44.676840   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013938s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.90s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-t8wfn" [9cb3c0a9-8840-4eba-bcbf-446a162bda30] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011379826s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220511172536-84527 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Done: kubectl --context embed-certs-20220511172536-84527 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.89017303s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.90s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220511172536-84527 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.71s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (5.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220511172536-84527 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Done: out/minikube-darwin-amd64 pause -p embed-certs-20220511172536-84527 --alsologtostderr -v=1: (1.746460638s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220511172536-84527 -n embed-certs-20220511172536-84527
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220511172536-84527 -n embed-certs-20220511172536-84527: exit status 2 (660.850006ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220511172536-84527 -n embed-certs-20220511172536-84527
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220511172536-84527 -n embed-certs-20220511172536-84527: exit status 2 (656.850181ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20220511172536-84527 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220511172536-84527 -n embed-certs-20220511172536-84527
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220511172536-84527 -n embed-certs-20220511172536-84527
--- PASS: TestStartStop/group/embed-certs/serial/Pause (5.34s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220511173638-84527 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Done: kubectl --context default-k8s-different-port-20220511173638-84527 create -f testdata/busybox.yaml: (1.891470259s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [133763ef-c1ea-4bf7-bad1-39b33868158a] Pending
helpers_test.go:342: "busybox" [133763ef-c1ea-4bf7-bad1-39b33868158a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [133763ef-c1ea-4bf7-bad1-39b33868158a] Running
E0511 17:42:16.188382   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220511164516-84527/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.01226458s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220511173638-84527 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.04s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (60.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220511174217-84527 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6-rc.0

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220511174217-84527 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6-rc.0: (1m0.924427215s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.92s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220511173638-84527 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220511173638-84527 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/Stop (17.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220511173638-84527 --alsologtostderr -v=3
E0511 17:42:35.362443   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220511173638-84527 --alsologtostderr -v=3: (17.820409328s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (17.82s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220511173638-84527 -n default-k8s-different-port-20220511173638-84527
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220511173638-84527 -n default-k8s-different-port-20220511173638-84527: exit status 7 (203.893123ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220511173638-84527 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.52s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/SecondStart (580.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220511173638-84527 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.5
E0511 17:42:43.459809   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/bridge-20220511164515-84527/client.crt: no such file or directory
E0511 17:43:10.410569   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/auto-20220511164515-84527/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220511173638-84527 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.5: (9m39.473651481s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220511173638-84527 -n default-k8s-different-port-20220511173638-84527
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (580.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220511174217-84527 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (19.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220511174217-84527 --alsologtostderr -v=3
E0511 17:43:34.466031   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/enable-default-cni-20220511164515-84527/client.crt: no such file or directory
E0511 17:43:34.748715   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/functional-20220511160019-84527/client.crt: no such file or directory
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20220511174217-84527 --alsologtostderr -v=3: (19.794567112s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (19.79s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220511174217-84527 -n newest-cni-20220511174217-84527
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220511174217-84527 -n newest-cni-20220511174217-84527: exit status 7 (187.280504ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220511174217-84527 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (58.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220511174217-84527 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6-rc.0
E0511 17:43:41.588910   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/false-20220511164516-84527/client.crt: no such file or directory
E0511 17:43:45.591246   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/cilium-20220511164516-84527/client.crt: no such file or directory
E0511 17:43:47.568330   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/skaffold-20220511164202-84527/client.crt: no such file or directory
E0511 17:44:00.126657   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
E0511 17:44:27.850693   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/no-preload-20220511172703-84527/client.crt: no such file or directory
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220511174217-84527 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6-rc.0: (57.659585838s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220511174217-84527 -n newest-cni-20220511174217-84527
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (58.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.67s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220511174217-84527 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.67s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220511174217-84527 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220511174217-84527 -n newest-cni-20220511174217-84527
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220511174217-84527 -n newest-cni-20220511174217-84527: exit status 2 (648.573904ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220511174217-84527 -n newest-cni-20220511174217-84527
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220511174217-84527 -n newest-cni-20220511174217-84527: exit status 2 (686.607306ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20220511174217-84527 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220511174217-84527 -n newest-cni-20220511174217-84527
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220511174217-84527 -n newest-cni-20220511174217-84527
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.48s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-8f2fh" [40670b4c-05be-4a4a-ad27-f0b61f495141] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013824397s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-8f2fh" [40670b4c-05be-4a4a-ad27-f0b61f495141] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011292336s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220511173638-84527 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Done: kubectl --context default-k8s-different-port-20220511173638-84527 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.931770699s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220511173638-84527 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.66s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/Pause (4.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220511173638-84527 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220511173638-84527 -n default-k8s-different-port-20220511173638-84527
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220511173638-84527 -n default-k8s-different-port-20220511173638-84527: exit status 2 (648.773271ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220511173638-84527 -n default-k8s-different-port-20220511173638-84527
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220511173638-84527 -n default-k8s-different-port-20220511173638-84527: exit status 2 (636.577546ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220511173638-84527 --alsologtostderr -v=1
E0511 17:52:35.377691   84527 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13639-82908-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/ingress-addon-legacy-20220511160505-84527/client.crt: no such file or directory
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220511173638-84527 -n default-k8s-different-port-20220511173638-84527
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220511173638-84527 -n default-k8s-different-port-20220511173638-84527
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (4.45s)

                                                
                                    

Test skip (20/280)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.23.5/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.5/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.23.5/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.5/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.23.6-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.23.6-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/binaries (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Registry (12.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 14.851132ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-qd5jh" [7125f00b-f49e-4317-b07a-1c10429be016] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.018780462s
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-2ww88" [43807791-e251-4a56-b063-cb540bc2cc1b] Running
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010417544s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220511155445-84527 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220511155445-84527 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: (dbg) Done: kubectl --context addons-20220511155445-84527 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.690167372s)
addons_test.go:305: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (12.79s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (12.04s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220511155445-84527 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220511155445-84527 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220511155445-84527 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [f64678a4-54bd-4e81-8e33-2a18671a6706] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [f64678a4-54bd-4e81-8e33-2a18671a6706] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.013988747s
addons_test.go:212: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220511155445-84527 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:232: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.04s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1561: (dbg) Run:  kubectl --context functional-20220511160019-84527 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1567: (dbg) Run:  kubectl --context functional-20220511160019-84527 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1572: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-lbqm8" [7bf9fcee-2f87-41c4-b6ff-97caeee67c8d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-connect-74cf8bc446-lbqm8" [7bf9fcee-2f87-41c4-b6ff-97caeee67c8d] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1572: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.007081431s
functional_test.go:1578: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (11.13s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (35.16s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:162: (dbg) Run:  kubectl --context ingress-addon-legacy-20220511160505-84527 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context ingress-addon-legacy-20220511160505-84527 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.678111937s)
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220511160505-84527 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:182: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220511160505-84527 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (214.446451ms)

                                                
                                                
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.111.163.159:443: connect: connection refused

                                                
                                                
** /stderr **
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220511160505-84527 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:182: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220511160505-84527 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (161.404662ms)

                                                
                                                
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.111.163.159:443: connect: connection refused

                                                
                                                
** /stderr **
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220511160505-84527 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:182: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220511160505-84527 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (161.426373ms)

                                                
                                                
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.111.163.159:443: connect: connection refused

                                                
                                                
** /stderr **
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220511160505-84527 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:182: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220511160505-84527 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (146.200124ms)

                                                
                                                
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.111.163.159:443: connect: connection refused

                                                
                                                
** /stderr **
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220511160505-84527 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context ingress-addon-legacy-20220511160505-84527 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [ec62a5eb-7775-432a-9f95-cc1fa2454c79] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [ec62a5eb-7775-432a-9f95-cc1fa2454c79] Running
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.008872115s
addons_test.go:212: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220511160505-84527 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:232: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (35.16s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel (0.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220511164515-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220511164515-84527
--- SKIP: TestNetworkPlugins/group/flannel (0.89s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220511172702-84527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220511172702-84527
--- SKIP: TestStartStop/group/disable-driver-mounts (0.95s)