Test Report: Docker_macOS 15232

0194cc3582ecd25a736ac3660bc9effa677f982b:2022-11-01:26370

Test fail (15/295)

TestIngressAddonLegacy/StartLegacyK8sCluster (254.34s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-155406 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1101 15:55:03.191932    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:57:19.340655    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:57:44.170875    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:44.176386    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:44.188605    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:44.210880    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:44.251576    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:44.333777    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:44.494086    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:44.814170    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:45.455030    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:46.737349    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:47.029713    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:57:49.297483    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:54.417837    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:58:04.657867    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-155406 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m14.310729265s)
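To reproduce this failure outside CI, the same invocation can be re-run against a local build (command, profile name, and flags copied verbatim from the log above; the out/ binary path assumes the integration job's build tree):

	out/minikube-darwin-amd64 start -p ingress-addon-legacy-155406 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker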

-- stdout --
	* [ingress-addon-legacy-155406] minikube v1.27.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-155406 in cluster ingress-addon-legacy-155406
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I1101 15:54:06.919419    6026 out.go:296] Setting OutFile to fd 1 ...
	I1101 15:54:06.919582    6026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 15:54:06.919587    6026 out.go:309] Setting ErrFile to fd 2...
	I1101 15:54:06.919591    6026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 15:54:06.919698    6026 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	I1101 15:54:06.920254    6026 out.go:303] Setting JSON to false
	I1101 15:54:06.939076    6026 start.go:116] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1421,"bootTime":1667341825,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1101 15:54:06.939223    6026 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1101 15:54:06.961780    6026 out.go:177] * [ingress-addon-legacy-155406] minikube v1.27.1 on Darwin 13.0
	I1101 15:54:07.005783    6026 notify.go:220] Checking for updates...
	I1101 15:54:07.027310    6026 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 15:54:07.048728    6026 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 15:54:07.070763    6026 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1101 15:54:07.092573    6026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 15:54:07.113710    6026 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	I1101 15:54:07.135830    6026 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 15:54:07.197160    6026 docker.go:137] docker version: linux-20.10.20
	I1101 15:54:07.197319    6026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 15:54:07.338629    6026 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-01 22:54:07.257675909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 15:54:07.382313    6026 out.go:177] * Using the docker driver based on user configuration
	I1101 15:54:07.403410    6026 start.go:282] selected driver: docker
	I1101 15:54:07.403437    6026 start.go:808] validating driver "docker" against <nil>
	I1101 15:54:07.403467    6026 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 15:54:07.407285    6026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 15:54:07.549483    6026 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-01 22:54:07.469000737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 15:54:07.549594    6026 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1101 15:54:07.549731    6026 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 15:54:07.571565    6026 out.go:177] * Using Docker Desktop driver with root privileges
	I1101 15:54:07.593464    6026 cni.go:95] Creating CNI manager for ""
	I1101 15:54:07.593496    6026 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 15:54:07.593512    6026 start_flags.go:317] config:
	{Name:ingress-addon-legacy-155406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-155406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 15:54:07.615176    6026 out.go:177] * Starting control plane node ingress-addon-legacy-155406 in cluster ingress-addon-legacy-155406
	I1101 15:54:07.657474    6026 cache.go:120] Beginning downloading kic base image for docker with docker
	I1101 15:54:07.679465    6026 out.go:177] * Pulling base image ...
	I1101 15:54:07.722404    6026 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1101 15:54:07.722464    6026 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 15:54:07.777831    6026 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1101 15:54:07.777853    6026 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1101 15:54:07.806734    6026 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1101 15:54:07.806764    6026 cache.go:57] Caching tarball of preloaded images
	I1101 15:54:07.807167    6026 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1101 15:54:07.851201    6026 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1101 15:54:07.872430    6026 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1101 15:54:07.959227    6026 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1101 15:54:12.132975    6026 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1101 15:54:12.133241    6026 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1101 15:54:12.750863    6026 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1101 15:54:12.751136    6026 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/config.json ...
	I1101 15:54:12.751166    6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/config.json: {Name:mkc057165dd22cb54ce9b6c28b65dd8e7b7e727d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 15:54:12.751472    6026 cache.go:208] Successfully downloaded all kic artifacts
	I1101 15:54:12.751497    6026 start.go:364] acquiring machines lock for ingress-addon-legacy-155406: {Name:mk2aef93171a1a7629f910f37708b3772b41b4c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 15:54:12.751628    6026 start.go:368] acquired machines lock for "ingress-addon-legacy-155406" in 124.139µs
	I1101 15:54:12.751655    6026 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-155406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-155406 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1101 15:54:12.751738    6026 start.go:125] createHost starting for "" (driver="docker")
	I1101 15:54:12.796760    6026 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 15:54:12.797000    6026 start.go:159] libmachine.API.Create for "ingress-addon-legacy-155406" (driver="docker")
	I1101 15:54:12.797046    6026 client.go:168] LocalClient.Create starting
	I1101 15:54:12.797163    6026 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem
	I1101 15:54:12.797250    6026 main.go:134] libmachine: Decoding PEM data...
	I1101 15:54:12.797266    6026 main.go:134] libmachine: Parsing certificate...
	I1101 15:54:12.797319    6026 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem
	I1101 15:54:12.797386    6026 main.go:134] libmachine: Decoding PEM data...
	I1101 15:54:12.797396    6026 main.go:134] libmachine: Parsing certificate...
	I1101 15:54:12.797981    6026 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-155406 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 15:54:12.857239    6026 cli_runner.go:211] docker network inspect ingress-addon-legacy-155406 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 15:54:12.857364    6026 network_create.go:272] running [docker network inspect ingress-addon-legacy-155406] to gather additional debugging logs...
	I1101 15:54:12.857390    6026 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-155406
	W1101 15:54:12.914199    6026 cli_runner.go:211] docker network inspect ingress-addon-legacy-155406 returned with exit code 1
	I1101 15:54:12.914224    6026 network_create.go:275] error running [docker network inspect ingress-addon-legacy-155406]: docker network inspect ingress-addon-legacy-155406: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-155406
	I1101 15:54:12.914248    6026 network_create.go:277] output of [docker network inspect ingress-addon-legacy-155406]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-155406
	
	** /stderr **
	I1101 15:54:12.914352    6026 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 15:54:12.969924    6026 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000b104c0] misses:0}
	I1101 15:54:12.969963    6026 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 15:54:12.969975    6026 network_create.go:115] attempt to create docker network ingress-addon-legacy-155406 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 15:54:12.970072    6026 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-155406 ingress-addon-legacy-155406
	I1101 15:54:13.057836    6026 network_create.go:99] docker network ingress-addon-legacy-155406 192.168.49.0/24 created
	I1101 15:54:13.057881    6026 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-155406" container
	I1101 15:54:13.058016    6026 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 15:54:13.114962    6026 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-155406 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-155406 --label created_by.minikube.sigs.k8s.io=true
	I1101 15:54:13.171685    6026 oci.go:103] Successfully created a docker volume ingress-addon-legacy-155406
	I1101 15:54:13.171821    6026 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-155406-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-155406 --entrypoint /usr/bin/test -v ingress-addon-legacy-155406:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1101 15:54:13.771009    6026 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-155406
	I1101 15:54:13.771059    6026 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1101 15:54:13.771073    6026 kic.go:179] Starting extracting preloaded images to volume ...
	I1101 15:54:13.771201    6026 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-155406:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 15:54:18.843420    6026 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-155406:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (5.072248804s)
	I1101 15:54:18.843442    6026 kic.go:188] duration metric: took 5.072492 seconds to extract preloaded images to volume
	I1101 15:54:18.843564    6026 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 15:54:18.986566    6026 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-155406 --name ingress-addon-legacy-155406 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-155406 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-155406 --network ingress-addon-legacy-155406 --ip 192.168.49.2 --volume ingress-addon-legacy-155406:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1101 15:54:19.340404    6026 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-155406 --format={{.State.Running}}
	I1101 15:54:19.401162    6026 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-155406 --format={{.State.Status}}
	I1101 15:54:19.463125    6026 cli_runner.go:164] Run: docker exec ingress-addon-legacy-155406 stat /var/lib/dpkg/alternatives/iptables
	I1101 15:54:19.585807    6026 oci.go:144] the created container "ingress-addon-legacy-155406" has a running status.
	I1101 15:54:19.585841    6026 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa...
	I1101 15:54:19.787116    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 15:54:19.787204    6026 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 15:54:19.894883    6026 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-155406 --format={{.State.Status}}
	I1101 15:54:19.953285    6026 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 15:54:19.953311    6026 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-155406 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 15:54:20.060715    6026 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-155406 --format={{.State.Status}}
	I1101 15:54:20.117785    6026 machine.go:88] provisioning docker machine ...
	I1101 15:54:20.117831    6026 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-155406"
	I1101 15:54:20.117944    6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
	I1101 15:54:20.177198    6026 main.go:134] libmachine: Using SSH client type: native
	I1101 15:54:20.177401    6026 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 50503 <nil> <nil>}
	I1101 15:54:20.177417    6026 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-155406 && echo "ingress-addon-legacy-155406" | sudo tee /etc/hostname
	I1101 15:54:20.304005    6026 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-155406
	
	I1101 15:54:20.304129    6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
	I1101 15:54:20.364870    6026 main.go:134] libmachine: Using SSH client type: native
	I1101 15:54:20.365043    6026 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 50503 <nil> <nil>}
	I1101 15:54:20.365064    6026 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-155406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-155406/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-155406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 15:54:20.482437    6026 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 15:54:20.482455    6026 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15232-2108/.minikube CaCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15232-2108/.minikube}
	I1101 15:54:20.482480    6026 ubuntu.go:177] setting up certificates
	I1101 15:54:20.482488    6026 provision.go:83] configureAuth start
	I1101 15:54:20.482575    6026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-155406
	I1101 15:54:20.539273    6026 provision.go:138] copyHostCerts
	I1101 15:54:20.539317    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem
	I1101 15:54:20.539385    6026 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem, removing ...
	I1101 15:54:20.539392    6026 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem
	I1101 15:54:20.539513    6026 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem (1675 bytes)
	I1101 15:54:20.539705    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem
	I1101 15:54:20.539742    6026 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem, removing ...
	I1101 15:54:20.539748    6026 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem
	I1101 15:54:20.539815    6026 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem (1082 bytes)
	I1101 15:54:20.539940    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem
	I1101 15:54:20.539973    6026 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem, removing ...
	I1101 15:54:20.539978    6026 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem
	I1101 15:54:20.540044    6026 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem (1123 bytes)
	I1101 15:54:20.540192    6026 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-155406 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-155406]
	I1101 15:54:20.824844    6026 provision.go:172] copyRemoteCerts
	I1101 15:54:20.824907    6026 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 15:54:20.824969    6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
	I1101 15:54:20.884535    6026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa Username:docker}
	I1101 15:54:20.973548    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 15:54:20.973640    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 15:54:20.990289    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 15:54:20.990365    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 15:54:21.008145    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 15:54:21.008222    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1101 15:54:21.025338    6026 provision.go:86] duration metric: configureAuth took 542.849779ms
	I1101 15:54:21.025350    6026 ubuntu.go:193] setting minikube options for container-runtime
	I1101 15:54:21.025529    6026 config.go:180] Loaded profile config "ingress-addon-legacy-155406": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1101 15:54:21.025617    6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
	I1101 15:54:21.082892    6026 main.go:134] libmachine: Using SSH client type: native
	I1101 15:54:21.083050    6026 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 50503 <nil> <nil>}
	I1101 15:54:21.083068    6026 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 15:54:21.202666    6026 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1101 15:54:21.202684    6026 ubuntu.go:71] root file system type: overlay
	I1101 15:54:21.202847    6026 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 15:54:21.202950    6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
	I1101 15:54:21.262257    6026 main.go:134] libmachine: Using SSH client type: native
	I1101 15:54:21.262421    6026 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 50503 <nil> <nil>}
	I1101 15:54:21.262475    6026 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 15:54:21.388424    6026 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 15:54:21.388552    6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
	I1101 15:54:21.445284    6026 main.go:134] libmachine: Using SSH client type: native
	I1101 15:54:21.445457    6026 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 50503 <nil> <nil>}
	I1101 15:54:21.445470    6026 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 15:54:22.031631    6026 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-01 22:54:21.406669928 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1101 15:54:22.031651    6026 machine.go:91] provisioned docker machine in 1.913893379s
	I1101 15:54:22.031707    6026 client.go:171] LocalClient.Create took 9.234851349s
	I1101 15:54:22.031730    6026 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-155406" took 9.234953934s
	I1101 15:54:22.031758    6026 start.go:300] post-start starting for "ingress-addon-legacy-155406" (driver="docker")
	I1101 15:54:22.031788    6026 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 15:54:22.031907    6026 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 15:54:22.032020    6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
	I1101 15:54:22.091848    6026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa Username:docker}
	I1101 15:54:22.183883    6026 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 15:54:22.187639    6026 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 15:54:22.187656    6026 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 15:54:22.187663    6026 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 15:54:22.187669    6026 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1101 15:54:22.187679    6026 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/addons for local assets ...
	I1101 15:54:22.187777    6026 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/files for local assets ...
	I1101 15:54:22.187975    6026 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem -> 34132.pem in /etc/ssl/certs
	I1101 15:54:22.187981    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem -> /etc/ssl/certs/34132.pem
	I1101 15:54:22.188212    6026 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 15:54:22.195033    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /etc/ssl/certs/34132.pem (1708 bytes)
	I1101 15:54:22.211869    6026 start.go:303] post-start completed in 180.082611ms
	I1101 15:54:22.212468    6026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-155406
	I1101 15:54:22.269896    6026 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/config.json ...
	I1101 15:54:22.270336    6026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 15:54:22.270429    6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
	I1101 15:54:22.328666    6026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa Username:docker}
	I1101 15:54:22.413168    6026 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 15:54:22.417736    6026 start.go:128] duration metric: createHost completed in 9.666221377s
	I1101 15:54:22.417754    6026 start.go:83] releasing machines lock for "ingress-addon-legacy-155406", held for 9.666352131s
	I1101 15:54:22.417857    6026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-155406
	I1101 15:54:22.476642    6026 ssh_runner.go:195] Run: systemctl --version
	I1101 15:54:22.476659    6026 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1101 15:54:22.476733    6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
	I1101 15:54:22.476739    6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
	I1101 15:54:22.541662    6026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa Username:docker}
	I1101 15:54:22.541647    6026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa Username:docker}
	I1101 15:54:22.887654    6026 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 15:54:22.898191    6026 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1101 15:54:22.898254    6026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 15:54:22.907139    6026 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 15:54:22.919943    6026 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 15:54:22.995665    6026 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 15:54:23.062239    6026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 15:54:23.126468    6026 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 15:54:23.328830    6026 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 15:54:23.357001    6026 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 15:54:23.410650    6026 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.20 ...
	I1101 15:54:23.410835    6026 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-155406 dig +short host.docker.internal
	I1101 15:54:23.528557    6026 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1101 15:54:23.528677    6026 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1101 15:54:23.533157    6026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 15:54:23.543191    6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
	I1101 15:54:23.601432    6026 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1101 15:54:23.601514    6026 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 15:54:23.625463    6026 docker.go:613] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1101 15:54:23.625512    6026 docker.go:543] Images already preloaded, skipping extraction
	I1101 15:54:23.625640    6026 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 15:54:23.649285    6026 docker.go:613] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1101 15:54:23.649307    6026 cache_images.go:84] Images are preloaded, skipping loading
	I1101 15:54:23.649399    6026 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1101 15:54:23.714872    6026 cni.go:95] Creating CNI manager for ""
	I1101 15:54:23.714887    6026 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 15:54:23.714900    6026 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 15:54:23.714926    6026 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-155406 NodeName:ingress-addon-legacy-155406 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1101 15:54:23.715041    6026 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-155406"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 15:54:23.715125    6026 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-155406 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-155406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 15:54:23.715199    6026 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1101 15:54:23.722737    6026 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 15:54:23.722813    6026 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 15:54:23.729830    6026 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1101 15:54:23.742386    6026 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1101 15:54:23.754895    6026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I1101 15:54:23.769689    6026 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 15:54:23.773386    6026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 15:54:23.782543    6026 certs.go:54] Setting up /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406 for IP: 192.168.49.2
	I1101 15:54:23.782712    6026 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key
	I1101 15:54:23.782846    6026 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key
	I1101 15:54:23.782935    6026 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/client.key
	I1101 15:54:23.782991    6026 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/client.crt with IP's: []
	I1101 15:54:23.900664    6026 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/client.crt ...
	I1101 15:54:23.900678    6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/client.crt: {Name:mkdb4aa1fb2a3c4956f9cfe604c0e6ab8b485639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 15:54:23.900989    6026 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/client.key ...
	I1101 15:54:23.900997    6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/client.key: {Name:mka4d335a3ca4cb9187a8ce6c14e2b88f7f8f4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 15:54:23.901231    6026 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.key.dd3b5fb2
	I1101 15:54:23.901253    6026 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 15:54:24.003608    6026 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.crt.dd3b5fb2 ...
	I1101 15:54:24.003618    6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.crt.dd3b5fb2: {Name:mk14ccf3269d74b2d967c3f64898b38556e93b19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 15:54:24.003864    6026 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.key.dd3b5fb2 ...
	I1101 15:54:24.003872    6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.key.dd3b5fb2: {Name:mka869dae2da2e9cb17bc526aec28ae2f2248554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 15:54:24.004069    6026 certs.go:320] copying /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.crt
	I1101 15:54:24.004239    6026 certs.go:324] copying /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.key
	I1101 15:54:24.004403    6026 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.key
	I1101 15:54:24.004422    6026 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.crt with IP's: []
	I1101 15:54:24.147404    6026 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.crt ...
	I1101 15:54:24.147413    6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.crt: {Name:mk943ba14cc85b79d0c50b2da8c14438e6db01a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 15:54:24.147661    6026 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.key ...
	I1101 15:54:24.147669    6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.key: {Name:mk013fa177ec7ed4e53db51ab8122c7d9611f8b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 15:54:24.147865    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 15:54:24.147898    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 15:54:24.147921    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 15:54:24.147951    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 15:54:24.147974    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 15:54:24.147996    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 15:54:24.148015    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 15:54:24.148035    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 15:54:24.148133    6026 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem (1338 bytes)
	W1101 15:54:24.148180    6026 certs.go:384] ignoring /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413_empty.pem, impossibly tiny 0 bytes
	I1101 15:54:24.148191    6026 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 15:54:24.148232    6026 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem (1082 bytes)
	I1101 15:54:24.148267    6026 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem (1123 bytes)
	I1101 15:54:24.148299    6026 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem (1675 bytes)
	I1101 15:54:24.148385    6026 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem (1708 bytes)
	I1101 15:54:24.148429    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem -> /usr/share/ca-certificates/34132.pem
	I1101 15:54:24.148457    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 15:54:24.148479    6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem -> /usr/share/ca-certificates/3413.pem
	I1101 15:54:24.148977    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 15:54:24.167867    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 15:54:24.185388    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 15:54:24.202148    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 15:54:24.218808    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 15:54:24.235809    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 15:54:24.252894    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 15:54:24.270077    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 15:54:24.286748    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /usr/share/ca-certificates/34132.pem (1708 bytes)
	I1101 15:54:24.304011    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 15:54:24.321160    6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem --> /usr/share/ca-certificates/3413.pem (1338 bytes)
	I1101 15:54:24.338256    6026 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 15:54:24.351530    6026 ssh_runner.go:195] Run: openssl version
	I1101 15:54:24.356758    6026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34132.pem && ln -fs /usr/share/ca-certificates/34132.pem /etc/ssl/certs/34132.pem"
	I1101 15:54:24.364532    6026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34132.pem
	I1101 15:54:24.368317    6026 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  1 22:49 /usr/share/ca-certificates/34132.pem
	I1101 15:54:24.368363    6026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34132.pem
	I1101 15:54:24.373484    6026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34132.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 15:54:24.381285    6026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 15:54:24.388955    6026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 15:54:24.392981    6026 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  1 22:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 15:54:24.393036    6026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 15:54:24.398196    6026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 15:54:24.405953    6026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3413.pem && ln -fs /usr/share/ca-certificates/3413.pem /etc/ssl/certs/3413.pem"
	I1101 15:54:24.413804    6026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3413.pem
	I1101 15:54:24.417827    6026 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  1 22:49 /usr/share/ca-certificates/3413.pem
	I1101 15:54:24.417877    6026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3413.pem
	I1101 15:54:24.422844    6026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3413.pem /etc/ssl/certs/51391683.0"
	I1101 15:54:24.430746    6026 kubeadm.go:396] StartCluster: {Name:ingress-addon-legacy-155406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-155406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 15:54:24.430866    6026 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 15:54:24.452825    6026 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 15:54:24.461177    6026 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 15:54:24.468304    6026 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 15:54:24.468365    6026 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 15:54:24.475716    6026 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 15:54:24.475741    6026 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 15:54:24.522040    6026 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
	I1101 15:54:24.522120    6026 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 15:54:24.807115    6026 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 15:54:24.807185    6026 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 15:54:24.807280    6026 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 15:54:25.023212    6026 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 15:54:25.023878    6026 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 15:54:25.023912    6026 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1101 15:54:25.093104    6026 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 15:54:25.137486    6026 out.go:204]   - Generating certificates and keys ...
	I1101 15:54:25.137586    6026 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 15:54:25.137651    6026 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 15:54:25.240429    6026 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 15:54:25.334957    6026 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1101 15:54:25.460358    6026 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1101 15:54:25.583179    6026 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1101 15:54:25.876619    6026 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1101 15:54:25.876814    6026 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-155406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 15:54:26.053675    6026 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1101 15:54:26.053825    6026 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-155406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 15:54:26.278578    6026 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 15:54:26.429244    6026 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 15:54:26.645872    6026 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1101 15:54:26.646072    6026 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 15:54:26.765771    6026 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 15:54:26.885957    6026 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 15:54:27.079505    6026 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 15:54:27.198860    6026 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 15:54:27.199796    6026 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 15:54:27.221390    6026 out.go:204]   - Booting up control plane ...
	I1101 15:54:27.221610    6026 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 15:54:27.221769    6026 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 15:54:27.221926    6026 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 15:54:27.222089    6026 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 15:54:27.222365    6026 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 15:55:07.183037    6026 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1101 15:55:07.183514    6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 15:55:07.183745    6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 15:55:12.180938    6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 15:55:12.181144    6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 15:55:22.175167    6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 15:55:22.175372    6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 15:55:42.161458    6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 15:55:42.161693    6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 15:56:22.133310    6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 15:56:22.133633    6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 15:56:22.133655    6026 kubeadm.go:317] 
	I1101 15:56:22.133714    6026 kubeadm.go:317] 	Unfortunately, an error has occurred:
	I1101 15:56:22.133804    6026 kubeadm.go:317] 		timed out waiting for the condition
	I1101 15:56:22.133820    6026 kubeadm.go:317] 
	I1101 15:56:22.133899    6026 kubeadm.go:317] 	This error is likely caused by:
	I1101 15:56:22.133958    6026 kubeadm.go:317] 		- The kubelet is not running
	I1101 15:56:22.134093    6026 kubeadm.go:317] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1101 15:56:22.134108    6026 kubeadm.go:317] 
	I1101 15:56:22.134230    6026 kubeadm.go:317] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1101 15:56:22.134298    6026 kubeadm.go:317] 		- 'systemctl status kubelet'
	I1101 15:56:22.134345    6026 kubeadm.go:317] 		- 'journalctl -xeu kubelet'
	I1101 15:56:22.134349    6026 kubeadm.go:317] 
	I1101 15:56:22.134449    6026 kubeadm.go:317] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1101 15:56:22.134512    6026 kubeadm.go:317] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1101 15:56:22.134517    6026 kubeadm.go:317] 
	I1101 15:56:22.134577    6026 kubeadm.go:317] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1101 15:56:22.134620    6026 kubeadm.go:317] 		- 'docker ps -a | grep kube | grep -v pause'
	I1101 15:56:22.134688    6026 kubeadm.go:317] 		Once you have found the failing container, you can inspect its logs with:
	I1101 15:56:22.134716    6026 kubeadm.go:317] 		- 'docker logs CONTAINERID'
	I1101 15:56:22.134726    6026 kubeadm.go:317] 
	I1101 15:56:22.137005    6026 kubeadm.go:317] W1101 22:54:24.522045     955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1101 15:56:22.137069    6026 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1101 15:56:22.137173    6026 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
	I1101 15:56:22.137259    6026 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 15:56:22.137377    6026 kubeadm.go:317] W1101 22:54:27.208088     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1101 15:56:22.137481    6026 kubeadm.go:317] W1101 22:54:27.209373     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1101 15:56:22.137553    6026 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1101 15:56:22.137613    6026 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1101 15:56:22.137793    6026 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-155406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-155406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1101 22:54:24.522045     955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1101 22:54:27.208088     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1101 22:54:27.209373     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-155406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-155406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1101 22:54:24.522045     955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1101 22:54:27.208088     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1101 22:54:27.209373     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1101 15:56:22.137824    6026 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1101 15:56:22.554601    6026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 15:56:22.564217    6026 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 15:56:22.564294    6026 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 15:56:22.571953    6026 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 15:56:22.571976    6026 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 15:56:22.618426    6026 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
	I1101 15:56:22.618481    6026 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 15:56:22.904594    6026 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 15:56:22.904700    6026 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 15:56:22.904783    6026 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 15:56:23.117989    6026 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 15:56:23.118668    6026 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 15:56:23.118716    6026 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1101 15:56:23.187566    6026 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 15:56:23.209012    6026 out.go:204]   - Generating certificates and keys ...
	I1101 15:56:23.209131    6026 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 15:56:23.209189    6026 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 15:56:23.209260    6026 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 15:56:23.209373    6026 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1101 15:56:23.209497    6026 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 15:56:23.209635    6026 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1101 15:56:23.209691    6026 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1101 15:56:23.209751    6026 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1101 15:56:23.209822    6026 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 15:56:23.209882    6026 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 15:56:23.209916    6026 kubeadm.go:317] [certs] Using the existing "sa" key
	I1101 15:56:23.209960    6026 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 15:56:23.275773    6026 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 15:56:23.417891    6026 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 15:56:23.473870    6026 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 15:56:23.642600    6026 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 15:56:23.643043    6026 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 15:56:23.664663    6026 out.go:204]   - Booting up control plane ...
	I1101 15:56:23.664793    6026 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 15:56:23.664933    6026 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 15:56:23.665042    6026 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 15:56:23.665189    6026 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 15:56:23.665468    6026 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 15:57:03.625112    6026 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1101 15:57:03.626171    6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 15:57:03.626353    6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 15:57:08.624187    6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 15:57:08.624418    6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 15:57:18.616693    6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 15:57:18.616863    6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 15:57:38.603348    6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 15:57:38.603495    6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 15:58:18.573997    6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 15:58:18.574164    6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 15:58:18.574176    6026 kubeadm.go:317] 
	I1101 15:58:18.574208    6026 kubeadm.go:317] 	Unfortunately, an error has occurred:
	I1101 15:58:18.574240    6026 kubeadm.go:317] 		timed out waiting for the condition
	I1101 15:58:18.574244    6026 kubeadm.go:317] 
	I1101 15:58:18.574273    6026 kubeadm.go:317] 	This error is likely caused by:
	I1101 15:58:18.574299    6026 kubeadm.go:317] 		- The kubelet is not running
	I1101 15:58:18.574412    6026 kubeadm.go:317] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1101 15:58:18.574423    6026 kubeadm.go:317] 
	I1101 15:58:18.574502    6026 kubeadm.go:317] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1101 15:58:18.574532    6026 kubeadm.go:317] 		- 'systemctl status kubelet'
	I1101 15:58:18.574562    6026 kubeadm.go:317] 		- 'journalctl -xeu kubelet'
	I1101 15:58:18.574575    6026 kubeadm.go:317] 
	I1101 15:58:18.574667    6026 kubeadm.go:317] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1101 15:58:18.574737    6026 kubeadm.go:317] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1101 15:58:18.574749    6026 kubeadm.go:317] 
	I1101 15:58:18.574835    6026 kubeadm.go:317] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1101 15:58:18.574878    6026 kubeadm.go:317] 		- 'docker ps -a | grep kube | grep -v pause'
	I1101 15:58:18.574943    6026 kubeadm.go:317] 		Once you have found the failing container, you can inspect its logs with:
	I1101 15:58:18.574973    6026 kubeadm.go:317] 		- 'docker logs CONTAINERID'
	I1101 15:58:18.574986    6026 kubeadm.go:317] 
	I1101 15:58:18.577301    6026 kubeadm.go:317] W1101 22:56:22.639514    3459 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1101 15:58:18.577373    6026 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1101 15:58:18.577486    6026 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
	I1101 15:58:18.577597    6026 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 15:58:18.577727    6026 kubeadm.go:317] W1101 22:56:23.649774    3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1101 15:58:18.577835    6026 kubeadm.go:317] W1101 22:56:23.650559    3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1101 15:58:18.577903    6026 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1101 15:58:18.577956    6026 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1101 15:58:18.577986    6026 kubeadm.go:398] StartCluster complete in 3m54.152918724s
	I1101 15:58:18.578086    6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 15:58:18.601189    6026 logs.go:274] 0 containers: []
	W1101 15:58:18.601201    6026 logs.go:276] No container was found matching "kube-apiserver"
	I1101 15:58:18.601283    6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 15:58:18.624636    6026 logs.go:274] 0 containers: []
	W1101 15:58:18.624648    6026 logs.go:276] No container was found matching "etcd"
	I1101 15:58:18.624736    6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 15:58:18.647359    6026 logs.go:274] 0 containers: []
	W1101 15:58:18.647371    6026 logs.go:276] No container was found matching "coredns"
	I1101 15:58:18.647451    6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 15:58:18.669427    6026 logs.go:274] 0 containers: []
	W1101 15:58:18.669438    6026 logs.go:276] No container was found matching "kube-scheduler"
	I1101 15:58:18.669523    6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 15:58:18.691652    6026 logs.go:274] 0 containers: []
	W1101 15:58:18.691663    6026 logs.go:276] No container was found matching "kube-proxy"
	I1101 15:58:18.691746    6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 15:58:18.713432    6026 logs.go:274] 0 containers: []
	W1101 15:58:18.713444    6026 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 15:58:18.713527    6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 15:58:18.735477    6026 logs.go:274] 0 containers: []
	W1101 15:58:18.735491    6026 logs.go:276] No container was found matching "storage-provisioner"
	I1101 15:58:18.735576    6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 15:58:18.757691    6026 logs.go:274] 0 containers: []
	W1101 15:58:18.757703    6026 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 15:58:18.757710    6026 logs.go:123] Gathering logs for container status ...
	I1101 15:58:18.757717    6026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 15:58:20.804664    6026 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046983442s)
	I1101 15:58:20.804845    6026 logs.go:123] Gathering logs for kubelet ...
	I1101 15:58:20.804855    6026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 15:58:20.846199    6026 logs.go:123] Gathering logs for dmesg ...
	I1101 15:58:20.846221    6026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 15:58:20.860840    6026 logs.go:123] Gathering logs for describe nodes ...
	I1101 15:58:20.860853    6026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 15:58:20.915013    6026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 15:58:20.915028    6026 logs.go:123] Gathering logs for Docker ...
	I1101 15:58:20.915035    6026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W1101 15:58:20.930302    6026 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1101 22:56:22.639514    3459 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1101 22:56:23.649774    3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1101 22:56:23.650559    3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1101 15:58:20.930323    6026 out.go:239] * 
	* 
	W1101 15:58:20.930468    6026 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1101 22:56:22.639514    3459 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1101 22:56:23.649774    3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1101 22:56:23.650559    3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1101 22:56:22.639514    3459 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1101 22:56:23.649774    3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1101 22:56:23.650559    3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 15:58:20.930484    6026 out.go:239] * 
	* 
	W1101 15:58:20.931166    6026 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 15:58:20.995955    6026 out.go:177] 
	W1101 15:58:21.062246    6026 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1101 22:56:22.639514    3459 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1101 22:56:23.649774    3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1101 22:56:23.650559    3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1101 22:56:22.639514    3459 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1101 22:56:23.649774    3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1101 22:56:23.650559    3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 15:58:21.062401    6026 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1101 15:58:21.062535    6026 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1101 15:58:21.104926    6026 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-155406 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (254.34s)
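This failure is the K8S_KUBELET_NOT_RUNNING case: kubeadm gave up in the wait-control-plane phase because the kubelet inside the ingress-addon-legacy-155406 node never answered on localhost:10248. A minimal troubleshooting sketch, assuming the Docker-driver node container from this run is still up and runs systemd; the container name, the grep pipeline, and the cgroup-driver flag are all taken from the log and its own suggestion, and nothing here was verified against this job:

	# inspect the kubelet inside the minikube node container
	docker exec -it ingress-addon-legacy-155406 systemctl status kubelet
	docker exec -it ingress-addon-legacy-155406 journalctl -xeu kubelet -n 100

	# list any control-plane containers the inner Docker runtime managed to start
	docker exec -it ingress-addon-legacy-155406 sh -c 'docker ps -a | grep kube | grep -v pause'

	# if the kubelet logs point at a cgroup-driver mismatch, retry the start with
	# the flag suggested at the end of the log above
	out/minikube-darwin-amd64 delete -p ingress-addon-legacy-155406
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-155406 \
	  --kubernetes-version=v1.18.20 --memory=4096 --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd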

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.58s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-155406 addons enable ingress --alsologtostderr -v=5
E1101 15:58:25.138085    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:59:06.098336    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-155406 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.123347949s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 15:58:21.256040    6339 out.go:296] Setting OutFile to fd 1 ...
	I1101 15:58:21.256385    6339 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 15:58:21.256391    6339 out.go:309] Setting ErrFile to fd 2...
	I1101 15:58:21.256395    6339 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 15:58:21.256525    6339 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	I1101 15:58:21.278590    6339 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1101 15:58:21.301068    6339 config.go:180] Loaded profile config "ingress-addon-legacy-155406": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1101 15:58:21.301098    6339 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-155406"
	I1101 15:58:21.301117    6339 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-155406"
	I1101 15:58:21.301683    6339 host.go:66] Checking if "ingress-addon-legacy-155406" exists ...
	I1101 15:58:21.302636    6339 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-155406 --format={{.State.Status}}
	I1101 15:58:21.382432    6339 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1101 15:58:21.403967    6339 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I1101 15:58:21.425168    6339 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1101 15:58:21.446149    6339 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1101 15:58:21.467016    6339 addons.go:345] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 15:58:21.467040    6339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I1101 15:58:21.467145    6339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
	I1101 15:58:21.524669    6339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa Username:docker}
	I1101 15:58:21.616424    6339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1101 15:58:21.667917    6339 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:21.667937    6339 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:21.944700    6339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1101 15:58:21.996752    6339 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:21.996776    6339 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:22.537883    6339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1101 15:58:22.589650    6339 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:22.589665    6339 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:23.246974    6339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1101 15:58:23.299177    6339 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:23.299192    6339 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:24.092624    6339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1101 15:58:24.146635    6339 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:24.146652    6339 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:25.318039    6339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1101 15:58:25.370554    6339 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:25.370574    6339 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:27.624092    6339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1101 15:58:27.674868    6339 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:27.674887    6339 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:29.287854    6339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1101 15:58:29.342628    6339 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:29.342650    6339 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:32.147773    6339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1101 15:58:32.200158    6339 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:32.200175    6339 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:36.027267    6339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1101 15:58:36.079799    6339 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:36.079815    6339 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:43.779327    6339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1101 15:58:43.834778    6339 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:43.834792    6339 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:58.472291    6339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1101 15:58:58.527026    6339 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:58:58.527042    6339 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:26.934359    6339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1101 15:59:26.986098    6339 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:26.986111    6339 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:50.156214    6339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1101 15:59:50.207587    6339 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:50.207621    6339 addons.go:383] Verifying addon ingress=true in "ingress-addon-legacy-155406"
	I1101 15:59:50.229343    6339 out.go:177] * Verifying ingress addon...
	I1101 15:59:50.252298    6339 out.go:177] 
	W1101 15:59:50.274317    6339 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-155406" does not exist: client config: context "ingress-addon-legacy-155406" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-155406" does not exist: client config: context "ingress-addon-legacy-155406" does not exist]
	W1101 15:59:50.274346    6339 out.go:239] * 
	* 
	W1101 15:59:50.278318    6339 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 15:59:50.300067    6339 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
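The addon failure is a downstream symptom of the same dead control plane: every retry of kubectl apply against /etc/kubernetes/addons/ingress-deploy.yaml is refused on localhost:8443, and the final MK_ADDON_ENABLE error also reports that the "ingress-addon-legacy-155406" kubeconfig context does not exist. A short verification sketch, assuming the same workspace binary; only the profile name comes from the log:

	# confirm apiserver/kubelet state before retrying the addon
	out/minikube-darwin-amd64 status -p ingress-addon-legacy-155406

	# check whether the kubeconfig context the addon code expects was ever written
	kubectl config get-contexts ingress-addon-legacy-155406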
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-155406
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-155406:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef",
	        "Created": "2022-11-01T22:54:19.061504612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 39894,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T22:54:19.350349345Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef/hostname",
	        "HostsPath": "/var/lib/docker/containers/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef/hosts",
	        "LogPath": "/var/lib/docker/containers/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef-json.log",
	        "Name": "/ingress-addon-legacy-155406",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-155406:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-155406",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/83b985e163a0b14e32fb38bf18d7c714b25501fc884a47d9adbd38b5da085aa6-init/diff:/var/lib/docker/overlay2/397c781354d1ae8b5c71df69b26a9a2493cf01723d23317a9b36f56b62ab53f3/diff:/var/lib/docker/overlay2/fe3fd9f7a011255c997093c6f7e1cb70c20cab26db5f52ff8b83c33d58519532/diff:/var/lib/docker/overlay2/f7328bad1e482720081fe1f9d1ab2ee05c71a9060abf63daf63a25e84818f237/diff:/var/lib/docker/overlay2/ca039979ed22affed678394443deee5ed35f2eb49243537b4205433189b87b2c/diff:/var/lib/docker/overlay2/a2ee3e754036b8777f801c847988e78d9b0ef881e82ea7467cef35a1261b9e20/diff:/var/lib/docker/overlay2/3de609efaeca546b0261017a1b19a9fa9ff6c9272609346b897e8075687c3698/diff:/var/lib/docker/overlay2/9101d388c406c87b2d10dc219dc3225ea59bfbedfc167adbfdf7578ed74a528b/diff:/var/lib/docker/overlay2/ba2db849d29a96ccb7729ee8861cfb647a06ba046b1016e99e3c2ef9e7b92675/diff:/var/lib/docker/overlay2/bb7315b5e1884c47eaad6eddfa4e422b1b240ff1d1112deab5ff41e40a12970d/diff:/var/lib/docker/overlay2/25fd1b
7d003c93a7ef576bb052318e940d8e1c8a40db37179b03563a8a099490/diff:/var/lib/docker/overlay2/f22743b1afcc328f7d2c4740efeb1401d6c011f499d200dc16b11a352dfc07f7/diff:/var/lib/docker/overlay2/59ca3268b7b3862516f40c07f313c5cdbe659f949ce4bd6e4eedcfcdd80409b0/diff:/var/lib/docker/overlay2/ce66536b9c7b7d4d38eeb3b0f5842c927c181c4584e60fa25989b9de30ec5856/diff:/var/lib/docker/overlay2/f0bdec7810d2b53f48492f34d7889fdb7c86d692422978de474816cf3bf8e923/diff:/var/lib/docker/overlay2/b0f0a882b23b6635539c83a8a2837c52090aa306e12f64ed83edcd03596f0cde/diff:/var/lib/docker/overlay2/60180139b1a11a94ee6174e6512bad4a5e162470c686d6cc7c91d7c9fb1907a2/diff:/var/lib/docker/overlay2/f1a7c8c448077705a2b48dfccf2f6e599a8ef782efd7d171b349ad43a0cddcae/diff:/var/lib/docker/overlay2/d64e00c1407419f2261e34d0974453ad696f514f79d8ecdac1b8c3a2a117349c/diff:/var/lib/docker/overlay2/7af90e8306e3b3e8ed7d2d67099da7a7cbe0ed97a5b983c84548135857efc4d0/diff:/var/lib/docker/overlay2/85101cd67d726a8a42d8951a230b3acd76d4a62615c6ffe4aac1ebef17ab422d/diff:/var/lib/d
ocker/overlay2/09a5d9c2f9897ae114e76d4aed5af38d250d044b1d274f8dafa0cfd17789ea54/diff:/var/lib/docker/overlay2/a6b97f972b460567b473da6022dd8658db13cb06830fcb676e8c1ebc927e1d44/diff:/var/lib/docker/overlay2/b569cecedfd9b79ea9a49645099405472d529e224ffe4abed0921d9fbec171a7/diff:/var/lib/docker/overlay2/278ceb611708e5dc8e810eaeb6b08b283d298009965d14772f2b61f95355477a/diff:/var/lib/docker/overlay2/c6693259dde0f3190d9019d8aca0c27c980d5c31a40fff8274d2a57d8ef19f41/diff:/var/lib/docker/overlay2/4db1d3b0ba37b1bfa0f486b9c1b327686a1069e2e6cbfc2e279c1f597f7cd346/diff:/var/lib/docker/overlay2/50e4b8ce3599837ac51b108fd983aa9b876f47f3e7253cd0976be8df23c73a33/diff:/var/lib/docker/overlay2/ad2b5d101e83bca01ddb2257701208ceb46b4668f6d14e84ee171975bb6175db/diff:/var/lib/docker/overlay2/746a904e8c69bb992522394e576896d4e35d056023809a58fbac92d497d2968a/diff:/var/lib/docker/overlay2/03794e35d9fe845753f9bcb5648e7a7c1fcf7db9bcd82c7c3824c2142cb8a2b6/diff:/var/lib/docker/overlay2/75caadeb2dfb8cc524a4e0f9d7862ccf017f755a24e00453f5a85eb29a5
837de/diff:/var/lib/docker/overlay2/1a5ce4ae9316bb13d1739267bf6b30a17188ca9ac127663735bfac3d15e50abe/diff:/var/lib/docker/overlay2/fa61eaf7b77e6fa75456860b8b75e4779478979f9b4ad94cd62eadd22743421e/diff:/var/lib/docker/overlay2/9c1cd4fe6bd059e33f020198f5ff305dab3f4b102b14b5894c76cae7dc769b92/diff:/var/lib/docker/overlay2/46cf92e0e9cc79002bfb0f5c2e0ab28c771f260b3fea2cb434cd84d3a1ea7659/diff:/var/lib/docker/overlay2/b47be14a30a9c0339a3a49b552cad979169d6c9a909e7837759a155b4c74d128/diff:/var/lib/docker/overlay2/598716c3d9ddb5de953d6a462fc1af49f742bbe02fd1c01f7d548a9f93d3913d/diff:/var/lib/docker/overlay2/cd665df1518202898f79e694456b55b64d6095a28556be2dc545241df7633be7/diff:/var/lib/docker/overlay2/909b0f879f4ce91be83bada76dad0599c2839fa8a6534f976ee095ad44dce7c6/diff:/var/lib/docker/overlay2/fd78ebbf3c4baf9a9f0036cb0ed9a8908a05f2e78572d88fcb3f026cb000710b/diff:/var/lib/docker/overlay2/8a030c72fc8571d3240e0ab2d2aea23b84385f28f3ef2dd82b5be5b925dbca5b/diff:/var/lib/docker/overlay2/d87a4221a646268a958798509b8c3cb343463c
c8427ae96a424f653a0a4508c7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/83b985e163a0b14e32fb38bf18d7c714b25501fc884a47d9adbd38b5da085aa6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/83b985e163a0b14e32fb38bf18d7c714b25501fc884a47d9adbd38b5da085aa6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/83b985e163a0b14e32fb38bf18d7c714b25501fc884a47d9adbd38b5da085aa6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-155406",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-155406/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-155406",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-155406",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-155406",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1d9cc49644cc1dadf2d5ae7edb8050d986225c274b9a3f96eab64839c737fef",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50503"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50504"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50505"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50506"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50507"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b1d9cc49644c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-155406": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b438431a5a88",
	                        "ingress-addon-legacy-155406"
	                    ],
	                    "NetworkID": "ba7421a9ce8682d3ba3fd6935f3a4516a351c39f0cc59654e31932cf627f6c8d",
	                    "EndpointID": "1935dc60e6843e9a728c264add07f993f2cded46b4bc24b2ef189b018819fb64",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
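The docker inspect dump above is long, but the post-mortem really only needs a couple of fields: whether the container is running and where its 8443/tcp (apiserver) port is published. Both can be pulled out with a Go template, the same mechanism the log itself uses for the 22/tcp SSH port further down. A small sketch, assuming the docker CLI is on PATH and using the container name from this report:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // inspect runs `docker inspect --format <tmpl> <name>` and returns the
    // trimmed output, mirroring how the test harness queries single fields.
    func inspect(name, tmpl string) string {
        out, err := exec.Command("docker", "inspect", "--format", tmpl, name).Output()
        if err != nil {
            log.Fatalf("docker inspect %s: %v", name, err)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        name := "ingress-addon-legacy-155406" // container name from this report
        state := inspect(name, "{{.State.Status}}")
        port := inspect(name, `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`)
        fmt.Printf("state=%s, apiserver published on host port %s\n", state, port)
    }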
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-155406 -n ingress-addon-legacy-155406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-155406 -n ingress-addon-legacy-155406: exit status 6 (401.30068ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 15:59:50.774638    6424 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-155406" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-155406" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.58s)
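The exit status 6 above comes from the kubeconfig, not from the container: the status check cannot find an entry named after the profile in /Users/jenkins/minikube-integration/15232-2108/kubeconfig, which is also why the warning suggests `minikube update-context`. A sketch of checking for that entry with client-go's kubeconfig loader (illustrative only; the path and profile name are taken from the error message above):

    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        path := "/Users/jenkins/minikube-integration/15232-2108/kubeconfig" // path from the error above
        name := "ingress-addon-legacy-155406"                               // profile/cluster name

        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            log.Fatalf("load kubeconfig: %v", err)
        }
        cluster, ok := cfg.Clusters[name]
        if !ok {
            // Roughly the condition behind the `"..." does not appear in
            // .../kubeconfig` error; `minikube update-context -p <profile>`
            // rewrites the entry.
            fmt.Printf("cluster %q missing from kubeconfig\n", name)
            return
        }
        fmt.Printf("cluster %q points at %s\n", name, cluster.Server)
    }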

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.51s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-155406 addons enable ingress-dns --alsologtostderr -v=5
E1101 16:00:28.018652    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-155406 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.06564987s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 15:59:50.838103    6434 out.go:296] Setting OutFile to fd 1 ...
	I1101 15:59:50.838464    6434 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 15:59:50.838470    6434 out.go:309] Setting ErrFile to fd 2...
	I1101 15:59:50.838474    6434 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 15:59:50.838587    6434 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	I1101 15:59:50.860667    6434 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1101 15:59:50.884303    6434 config.go:180] Loaded profile config "ingress-addon-legacy-155406": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1101 15:59:50.884349    6434 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-155406"
	I1101 15:59:50.884367    6434 addons.go:153] Setting addon ingress-dns=true in "ingress-addon-legacy-155406"
	I1101 15:59:50.885040    6434 host.go:66] Checking if "ingress-addon-legacy-155406" exists ...
	I1101 15:59:50.885812    6434 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-155406 --format={{.State.Status}}
	I1101 15:59:50.963844    6434 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1101 15:59:50.985906    6434 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I1101 15:59:51.007614    6434 addons.go:345] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 15:59:51.007645    6434 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I1101 15:59:51.007783    6434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
	I1101 15:59:51.065353    6434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa Username:docker}
	I1101 15:59:51.157569    6434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1101 15:59:51.207916    6434 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:51.207937    6434 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:51.484253    6434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1101 15:59:51.537622    6434 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:51.537641    6434 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:52.080187    6434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1101 15:59:52.133635    6434 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:52.133652    6434 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:52.790464    6434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1101 15:59:52.843724    6434 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:52.843737    6434 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:53.637281    6434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1101 15:59:53.689245    6434 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:53.689265    6434 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:54.861802    6434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1101 15:59:54.913758    6434 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:54.913776    6434 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:57.169107    6434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1101 15:59:57.224119    6434 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:57.224135    6434 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:58.837101    6434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1101 15:59:58.889570    6434 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 15:59:58.889584    6434 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 16:00:01.696177    6434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1101 16:00:01.749415    6434 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 16:00:01.749429    6434 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 16:00:05.576494    6434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1101 16:00:05.630305    6434 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 16:00:05.630319    6434 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 16:00:13.329949    6434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1101 16:00:13.381335    6434 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 16:00:13.381350    6434 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 16:00:28.018655    6434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1101 16:00:28.070563    6434 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 16:00:28.070578    6434 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 16:00:56.476846    6434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1101 16:00:56.530307    6434 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 16:00:56.530327    6434 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 16:01:19.700437    6434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1101 16:01:19.754112    6434 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1101 16:01:19.776382    6434 out.go:177] 
	W1101 16:01:19.798170    6434 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W1101 16:01:19.798203    6434 out.go:239] * 
	W1101 16:01:19.802142    6434 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 16:01:19.824091    6434 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
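Every apply attempt in the stderr above fails identically, and the `retry.go:31 will retry after ...` lines show the delay growing roughly exponentially (276ms, 540ms, 655ms, ... up to ~28s) until the 89-second budget is exhausted; the backoff cannot help because the apiserver behind localhost:8443 never starts listening. A generic sketch of that retry shape (not minikube's actual retry.go, just the same pattern):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling fn with an exponentially growing, jittered
    // delay until it succeeds or the overall budget is spent.
    func retryWithBackoff(fn func() error, initial, budget time.Duration) error {
        start := time.Now()
        delay := initial
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Since(start) > budget {
                return fmt.Errorf("gave up after %s: %w", time.Since(start).Round(time.Second), err)
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay))) // jitter
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
    }

    func main() {
        // Stand-in for the kubectl apply that keeps failing in the log above.
        apply := func() error {
            return errors.New("connection to the server localhost:8443 was refused")
        }
        fmt.Println(retryWithBackoff(apply, 250*time.Millisecond, 10*time.Second))
    }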
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-155406
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-155406:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef",
	        "Created": "2022-11-01T22:54:19.061504612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 39894,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T22:54:19.350349345Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef/hostname",
	        "HostsPath": "/var/lib/docker/containers/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef/hosts",
	        "LogPath": "/var/lib/docker/containers/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef-json.log",
	        "Name": "/ingress-addon-legacy-155406",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-155406:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-155406",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/83b985e163a0b14e32fb38bf18d7c714b25501fc884a47d9adbd38b5da085aa6-init/diff:/var/lib/docker/overlay2/397c781354d1ae8b5c71df69b26a9a2493cf01723d23317a9b36f56b62ab53f3/diff:/var/lib/docker/overlay2/fe3fd9f7a011255c997093c6f7e1cb70c20cab26db5f52ff8b83c33d58519532/diff:/var/lib/docker/overlay2/f7328bad1e482720081fe1f9d1ab2ee05c71a9060abf63daf63a25e84818f237/diff:/var/lib/docker/overlay2/ca039979ed22affed678394443deee5ed35f2eb49243537b4205433189b87b2c/diff:/var/lib/docker/overlay2/a2ee3e754036b8777f801c847988e78d9b0ef881e82ea7467cef35a1261b9e20/diff:/var/lib/docker/overlay2/3de609efaeca546b0261017a1b19a9fa9ff6c9272609346b897e8075687c3698/diff:/var/lib/docker/overlay2/9101d388c406c87b2d10dc219dc3225ea59bfbedfc167adbfdf7578ed74a528b/diff:/var/lib/docker/overlay2/ba2db849d29a96ccb7729ee8861cfb647a06ba046b1016e99e3c2ef9e7b92675/diff:/var/lib/docker/overlay2/bb7315b5e1884c47eaad6eddfa4e422b1b240ff1d1112deab5ff41e40a12970d/diff:/var/lib/docker/overlay2/25fd1b
7d003c93a7ef576bb052318e940d8e1c8a40db37179b03563a8a099490/diff:/var/lib/docker/overlay2/f22743b1afcc328f7d2c4740efeb1401d6c011f499d200dc16b11a352dfc07f7/diff:/var/lib/docker/overlay2/59ca3268b7b3862516f40c07f313c5cdbe659f949ce4bd6e4eedcfcdd80409b0/diff:/var/lib/docker/overlay2/ce66536b9c7b7d4d38eeb3b0f5842c927c181c4584e60fa25989b9de30ec5856/diff:/var/lib/docker/overlay2/f0bdec7810d2b53f48492f34d7889fdb7c86d692422978de474816cf3bf8e923/diff:/var/lib/docker/overlay2/b0f0a882b23b6635539c83a8a2837c52090aa306e12f64ed83edcd03596f0cde/diff:/var/lib/docker/overlay2/60180139b1a11a94ee6174e6512bad4a5e162470c686d6cc7c91d7c9fb1907a2/diff:/var/lib/docker/overlay2/f1a7c8c448077705a2b48dfccf2f6e599a8ef782efd7d171b349ad43a0cddcae/diff:/var/lib/docker/overlay2/d64e00c1407419f2261e34d0974453ad696f514f79d8ecdac1b8c3a2a117349c/diff:/var/lib/docker/overlay2/7af90e8306e3b3e8ed7d2d67099da7a7cbe0ed97a5b983c84548135857efc4d0/diff:/var/lib/docker/overlay2/85101cd67d726a8a42d8951a230b3acd76d4a62615c6ffe4aac1ebef17ab422d/diff:/var/lib/d
ocker/overlay2/09a5d9c2f9897ae114e76d4aed5af38d250d044b1d274f8dafa0cfd17789ea54/diff:/var/lib/docker/overlay2/a6b97f972b460567b473da6022dd8658db13cb06830fcb676e8c1ebc927e1d44/diff:/var/lib/docker/overlay2/b569cecedfd9b79ea9a49645099405472d529e224ffe4abed0921d9fbec171a7/diff:/var/lib/docker/overlay2/278ceb611708e5dc8e810eaeb6b08b283d298009965d14772f2b61f95355477a/diff:/var/lib/docker/overlay2/c6693259dde0f3190d9019d8aca0c27c980d5c31a40fff8274d2a57d8ef19f41/diff:/var/lib/docker/overlay2/4db1d3b0ba37b1bfa0f486b9c1b327686a1069e2e6cbfc2e279c1f597f7cd346/diff:/var/lib/docker/overlay2/50e4b8ce3599837ac51b108fd983aa9b876f47f3e7253cd0976be8df23c73a33/diff:/var/lib/docker/overlay2/ad2b5d101e83bca01ddb2257701208ceb46b4668f6d14e84ee171975bb6175db/diff:/var/lib/docker/overlay2/746a904e8c69bb992522394e576896d4e35d056023809a58fbac92d497d2968a/diff:/var/lib/docker/overlay2/03794e35d9fe845753f9bcb5648e7a7c1fcf7db9bcd82c7c3824c2142cb8a2b6/diff:/var/lib/docker/overlay2/75caadeb2dfb8cc524a4e0f9d7862ccf017f755a24e00453f5a85eb29a5
837de/diff:/var/lib/docker/overlay2/1a5ce4ae9316bb13d1739267bf6b30a17188ca9ac127663735bfac3d15e50abe/diff:/var/lib/docker/overlay2/fa61eaf7b77e6fa75456860b8b75e4779478979f9b4ad94cd62eadd22743421e/diff:/var/lib/docker/overlay2/9c1cd4fe6bd059e33f020198f5ff305dab3f4b102b14b5894c76cae7dc769b92/diff:/var/lib/docker/overlay2/46cf92e0e9cc79002bfb0f5c2e0ab28c771f260b3fea2cb434cd84d3a1ea7659/diff:/var/lib/docker/overlay2/b47be14a30a9c0339a3a49b552cad979169d6c9a909e7837759a155b4c74d128/diff:/var/lib/docker/overlay2/598716c3d9ddb5de953d6a462fc1af49f742bbe02fd1c01f7d548a9f93d3913d/diff:/var/lib/docker/overlay2/cd665df1518202898f79e694456b55b64d6095a28556be2dc545241df7633be7/diff:/var/lib/docker/overlay2/909b0f879f4ce91be83bada76dad0599c2839fa8a6534f976ee095ad44dce7c6/diff:/var/lib/docker/overlay2/fd78ebbf3c4baf9a9f0036cb0ed9a8908a05f2e78572d88fcb3f026cb000710b/diff:/var/lib/docker/overlay2/8a030c72fc8571d3240e0ab2d2aea23b84385f28f3ef2dd82b5be5b925dbca5b/diff:/var/lib/docker/overlay2/d87a4221a646268a958798509b8c3cb343463c
c8427ae96a424f653a0a4508c7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/83b985e163a0b14e32fb38bf18d7c714b25501fc884a47d9adbd38b5da085aa6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/83b985e163a0b14e32fb38bf18d7c714b25501fc884a47d9adbd38b5da085aa6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/83b985e163a0b14e32fb38bf18d7c714b25501fc884a47d9adbd38b5da085aa6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-155406",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-155406/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-155406",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-155406",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-155406",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1d9cc49644cc1dadf2d5ae7edb8050d986225c274b9a3f96eab64839c737fef",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50503"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50504"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50505"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50506"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50507"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b1d9cc49644c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-155406": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b438431a5a88",
	                        "ingress-addon-legacy-155406"
	                    ],
	                    "NetworkID": "ba7421a9ce8682d3ba3fd6935f3a4516a351c39f0cc59654e31932cf627f6c8d",
	                    "EndpointID": "1935dc60e6843e9a728c264add07f993f2cded46b4bc24b2ef189b018819fb64",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-155406 -n ingress-addon-legacy-155406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-155406 -n ingress-addon-legacy-155406: exit status 6 (388.514147ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 16:01:20.287194    6538 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-155406" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-155406" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.51s)
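Both addon failures above reduce to `connection to the server localhost:8443 was refused`: nothing is answering on the apiserver port. From the host side, the docker inspect output earlier shows the container's 8443/tcp published on host port 50507, so a first triage step could be a plain TCP probe of that address. A sketch, where the address is an assumption read off that port mapping rather than anything the test suite runs:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 127.0.0.1:50507 is the host side of the container's 8443/tcp mapping
        // in the docker inspect output above.
        addr := "127.0.0.1:50507"
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            fmt.Printf("apiserver port not reachable: %v\n", err)
            return
        }
        conn.Close()
        fmt.Printf("something is listening on %s\n", addr)
    }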

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:159: failed to get Kubernetes client: <nil>
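The assertion at addons_test.go:159 reports that it could not obtain a Kubernetes client at all, so the test never reached the cluster; this is consistent with the missing kubeconfig entry seen in the status checks above. For reference, a bare-bones version of that client construction with client-go looks like the following sketch (not the helper the suite actually uses; the kubeconfig path is the one from this report):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := "/Users/jenkins/minikube-integration/15232-2108/kubeconfig" // path from this report

        // Building the rest.Config is the step that fails when the profile's
        // cluster entry is missing or points at a stale endpoint.
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            log.Fatalf("build config: %v", err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatalf("new clientset: %v", err)
        }

        nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatalf("list nodes: %v", err)
        }
        fmt.Printf("cluster reachable, %d node(s)\n", len(nodes.Items))
    }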
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-155406
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-155406:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef",
	        "Created": "2022-11-01T22:54:19.061504612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 39894,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T22:54:19.350349345Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef/hostname",
	        "HostsPath": "/var/lib/docker/containers/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef/hosts",
	        "LogPath": "/var/lib/docker/containers/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef/b438431a5a8806a5efa8b402cab62d714e8bee194c39a30fb2a26aa9c3d1aaef-json.log",
	        "Name": "/ingress-addon-legacy-155406",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-155406:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-155406",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/83b985e163a0b14e32fb38bf18d7c714b25501fc884a47d9adbd38b5da085aa6-init/diff:/var/lib/docker/overlay2/397c781354d1ae8b5c71df69b26a9a2493cf01723d23317a9b36f56b62ab53f3/diff:/var/lib/docker/overlay2/fe3fd9f7a011255c997093c6f7e1cb70c20cab26db5f52ff8b83c33d58519532/diff:/var/lib/docker/overlay2/f7328bad1e482720081fe1f9d1ab2ee05c71a9060abf63daf63a25e84818f237/diff:/var/lib/docker/overlay2/ca039979ed22affed678394443deee5ed35f2eb49243537b4205433189b87b2c/diff:/var/lib/docker/overlay2/a2ee3e754036b8777f801c847988e78d9b0ef881e82ea7467cef35a1261b9e20/diff:/var/lib/docker/overlay2/3de609efaeca546b0261017a1b19a9fa9ff6c9272609346b897e8075687c3698/diff:/var/lib/docker/overlay2/9101d388c406c87b2d10dc219dc3225ea59bfbedfc167adbfdf7578ed74a528b/diff:/var/lib/docker/overlay2/ba2db849d29a96ccb7729ee8861cfb647a06ba046b1016e99e3c2ef9e7b92675/diff:/var/lib/docker/overlay2/bb7315b5e1884c47eaad6eddfa4e422b1b240ff1d1112deab5ff41e40a12970d/diff:/var/lib/docker/overlay2/25fd1b
7d003c93a7ef576bb052318e940d8e1c8a40db37179b03563a8a099490/diff:/var/lib/docker/overlay2/f22743b1afcc328f7d2c4740efeb1401d6c011f499d200dc16b11a352dfc07f7/diff:/var/lib/docker/overlay2/59ca3268b7b3862516f40c07f313c5cdbe659f949ce4bd6e4eedcfcdd80409b0/diff:/var/lib/docker/overlay2/ce66536b9c7b7d4d38eeb3b0f5842c927c181c4584e60fa25989b9de30ec5856/diff:/var/lib/docker/overlay2/f0bdec7810d2b53f48492f34d7889fdb7c86d692422978de474816cf3bf8e923/diff:/var/lib/docker/overlay2/b0f0a882b23b6635539c83a8a2837c52090aa306e12f64ed83edcd03596f0cde/diff:/var/lib/docker/overlay2/60180139b1a11a94ee6174e6512bad4a5e162470c686d6cc7c91d7c9fb1907a2/diff:/var/lib/docker/overlay2/f1a7c8c448077705a2b48dfccf2f6e599a8ef782efd7d171b349ad43a0cddcae/diff:/var/lib/docker/overlay2/d64e00c1407419f2261e34d0974453ad696f514f79d8ecdac1b8c3a2a117349c/diff:/var/lib/docker/overlay2/7af90e8306e3b3e8ed7d2d67099da7a7cbe0ed97a5b983c84548135857efc4d0/diff:/var/lib/docker/overlay2/85101cd67d726a8a42d8951a230b3acd76d4a62615c6ffe4aac1ebef17ab422d/diff:/var/lib/d
ocker/overlay2/09a5d9c2f9897ae114e76d4aed5af38d250d044b1d274f8dafa0cfd17789ea54/diff:/var/lib/docker/overlay2/a6b97f972b460567b473da6022dd8658db13cb06830fcb676e8c1ebc927e1d44/diff:/var/lib/docker/overlay2/b569cecedfd9b79ea9a49645099405472d529e224ffe4abed0921d9fbec171a7/diff:/var/lib/docker/overlay2/278ceb611708e5dc8e810eaeb6b08b283d298009965d14772f2b61f95355477a/diff:/var/lib/docker/overlay2/c6693259dde0f3190d9019d8aca0c27c980d5c31a40fff8274d2a57d8ef19f41/diff:/var/lib/docker/overlay2/4db1d3b0ba37b1bfa0f486b9c1b327686a1069e2e6cbfc2e279c1f597f7cd346/diff:/var/lib/docker/overlay2/50e4b8ce3599837ac51b108fd983aa9b876f47f3e7253cd0976be8df23c73a33/diff:/var/lib/docker/overlay2/ad2b5d101e83bca01ddb2257701208ceb46b4668f6d14e84ee171975bb6175db/diff:/var/lib/docker/overlay2/746a904e8c69bb992522394e576896d4e35d056023809a58fbac92d497d2968a/diff:/var/lib/docker/overlay2/03794e35d9fe845753f9bcb5648e7a7c1fcf7db9bcd82c7c3824c2142cb8a2b6/diff:/var/lib/docker/overlay2/75caadeb2dfb8cc524a4e0f9d7862ccf017f755a24e00453f5a85eb29a5
837de/diff:/var/lib/docker/overlay2/1a5ce4ae9316bb13d1739267bf6b30a17188ca9ac127663735bfac3d15e50abe/diff:/var/lib/docker/overlay2/fa61eaf7b77e6fa75456860b8b75e4779478979f9b4ad94cd62eadd22743421e/diff:/var/lib/docker/overlay2/9c1cd4fe6bd059e33f020198f5ff305dab3f4b102b14b5894c76cae7dc769b92/diff:/var/lib/docker/overlay2/46cf92e0e9cc79002bfb0f5c2e0ab28c771f260b3fea2cb434cd84d3a1ea7659/diff:/var/lib/docker/overlay2/b47be14a30a9c0339a3a49b552cad979169d6c9a909e7837759a155b4c74d128/diff:/var/lib/docker/overlay2/598716c3d9ddb5de953d6a462fc1af49f742bbe02fd1c01f7d548a9f93d3913d/diff:/var/lib/docker/overlay2/cd665df1518202898f79e694456b55b64d6095a28556be2dc545241df7633be7/diff:/var/lib/docker/overlay2/909b0f879f4ce91be83bada76dad0599c2839fa8a6534f976ee095ad44dce7c6/diff:/var/lib/docker/overlay2/fd78ebbf3c4baf9a9f0036cb0ed9a8908a05f2e78572d88fcb3f026cb000710b/diff:/var/lib/docker/overlay2/8a030c72fc8571d3240e0ab2d2aea23b84385f28f3ef2dd82b5be5b925dbca5b/diff:/var/lib/docker/overlay2/d87a4221a646268a958798509b8c3cb343463c
c8427ae96a424f653a0a4508c7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/83b985e163a0b14e32fb38bf18d7c714b25501fc884a47d9adbd38b5da085aa6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/83b985e163a0b14e32fb38bf18d7c714b25501fc884a47d9adbd38b5da085aa6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/83b985e163a0b14e32fb38bf18d7c714b25501fc884a47d9adbd38b5da085aa6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-155406",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-155406/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-155406",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-155406",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-155406",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1d9cc49644cc1dadf2d5ae7edb8050d986225c274b9a3f96eab64839c737fef",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50503"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50504"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50505"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50506"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50507"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b1d9cc49644c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-155406": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b438431a5a88",
	                        "ingress-addon-legacy-155406"
	                    ],
	                    "NetworkID": "ba7421a9ce8682d3ba3fd6935f3a4516a351c39f0cc59654e31932cf627f6c8d",
	                    "EndpointID": "1935dc60e6843e9a728c264add07f993f2cded46b4bc24b2ef189b018819fb64",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-155406 -n ingress-addon-legacy-155406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-155406 -n ingress-addon-legacy-155406: exit status 6 (387.485043ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 16:01:20.733311    6550 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-155406" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-155406" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)
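
The post-mortem above shows the container itself still running; the status check fails only because the "ingress-addon-legacy-155406" profile is missing from the kubeconfig, exactly as the stderr message says. A minimal manual check along the same lines, assuming the container and profile still exist (the --format query is just a narrower alternative to the full docker inspect dump above):

	docker inspect --format '{{json .NetworkSettings.Ports}}' ingress-addon-legacy-155406
	out/minikube-darwin-amd64 status -p ingress-addon-legacy-155406 --alsologtostderr
	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-155406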

                                                
                                    
TestRunningBinaryUpgrade (47.71s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2609758008.exe start -p running-upgrade-162105 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2609758008.exe start -p running-upgrade-162105 --memory=2200 --vm-driver=docker : exit status 70 (33.362405365s)

                                                
                                                
-- stdout --
	* [running-upgrade-162105] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig2547064037
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-01 23:21:20.852715927 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-162105" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-01 23:21:37.736898961 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-162105", then "minikube start -p running-upgrade-162105 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 25.00 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 60.05 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 91.61 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 129.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 165.11 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 199.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 245.75 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 284.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 312.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 346.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 392.23 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 433.58 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 480.97 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 521.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-01 23:21:37.736898961 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
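
The drop-in comments in the diff above describe the standard systemd pattern the legacy provisioner relies on: an empty ExecStart= first clears the command inherited from the base docker.service, and the following ExecStart= installs the replacement; without the reset, systemd rejects the unit with "more than one ExecStart= setting". Outside of this log, the same pattern in a standalone override would look roughly like this (a sketch only; the drop-in path is an assumption and the dockerd command is the stock one from the "-" side of the diff, not what minikube v1.9.0 writes):

	# /etc/systemd/system/docker.service.d/override.conf   (hypothetical path)
	[Service]
	# An empty assignment resets the ExecStart list inherited from docker.service.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock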
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2609758008.exe start -p running-upgrade-162105 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2609758008.exe start -p running-upgrade-162105 --memory=2200 --vm-driver=docker : exit status 70 (4.442054226s)

                                                
                                                
-- stdout --
	* [running-upgrade-162105] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1353914367
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-162105" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2609758008.exe start -p running-upgrade-162105 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2609758008.exe start -p running-upgrade-162105 --memory=2200 --vm-driver=docker : exit status 70 (4.513212235s)

                                                
                                                
-- stdout --
	* [running-upgrade-162105] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig2897459470
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-162105" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2022-11-01 16:21:50.849584 -0700 PDT m=+2260.021661582
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-162105
helpers_test.go:235: (dbg) docker inspect running-upgrade-162105:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "396b6274b6a3ea7f77e15ed0089f644cc32d6aa20738907d4c2fdb409bf2f059",
	        "Created": "2022-11-01T23:21:29.036803959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 144111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T23:21:29.258743751Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/396b6274b6a3ea7f77e15ed0089f644cc32d6aa20738907d4c2fdb409bf2f059/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/396b6274b6a3ea7f77e15ed0089f644cc32d6aa20738907d4c2fdb409bf2f059/hostname",
	        "HostsPath": "/var/lib/docker/containers/396b6274b6a3ea7f77e15ed0089f644cc32d6aa20738907d4c2fdb409bf2f059/hosts",
	        "LogPath": "/var/lib/docker/containers/396b6274b6a3ea7f77e15ed0089f644cc32d6aa20738907d4c2fdb409bf2f059/396b6274b6a3ea7f77e15ed0089f644cc32d6aa20738907d4c2fdb409bf2f059-json.log",
	        "Name": "/running-upgrade-162105",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-162105:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/08bcba086b3a36e037f665773b13d70df1ddb8703e59a86822ecfa3177db4db0-init/diff:/var/lib/docker/overlay2/9b283dac4f8b00fc66fa708cb4cefffa2996a70f4c229c15a28857048d7fdc88/diff:/var/lib/docker/overlay2/9ff108c965a9e06297091fb60e1521df68ecca05f3bef91142384dd37682e9fd/diff:/var/lib/docker/overlay2/9a55f45d33cbd87c6de64029a4c52052bc6337a9097fd25aa6b36f86fa64babd/diff:/var/lib/docker/overlay2/e1faa3d5318ed4d9ca7612256898e71cf2581d6ac2b41e74a82dcd4cd670685d/diff:/var/lib/docker/overlay2/0a7243b364227f81c6452f48b10ae614c7855e80e3ff85293aefec5b833e7295/diff:/var/lib/docker/overlay2/b2ad7fc463e128ecec363569c0ae8df97d5c4b2f9fdecd40d9775107e72c7db8/diff:/var/lib/docker/overlay2/0e7b2bd1402edaac22f1033f537a746786d9cdca6011c016b530c43c0609d7a0/diff:/var/lib/docker/overlay2/b7e2d4fff4eb761745add37347781b535e1d47ed10c1578bcef4d485ef849dd7/diff:/var/lib/docker/overlay2/300a951ced5e48e6f36334978a28da36fb5c6f2798c1138f2c8d358d3879a393/diff:/var/lib/docker/overlay2/d38191
d177365dae8ededbfc60b2b78c1f808237cb035105450c0fd7be258ac8/diff:/var/lib/docker/overlay2/8033d2d34fac3efba9e541516f559169ffc7b17d8530acb48a984396e4cce761/diff:/var/lib/docker/overlay2/ca5d4ba98f2706cf50fffc0bf9bbd96827d8923c63fce44c0cff3a083dd4d065/diff:/var/lib/docker/overlay2/a343b83f46f7302662a173eb2cf5c44b3f4ef4d53296704d932c198a9fe6b604/diff:/var/lib/docker/overlay2/ebdd14eb9316a922b2d55499a25917e46616991e9c6c31472554485544169f2e/diff:/var/lib/docker/overlay2/e012ab724b9e76a7a06ff5eeb9ab8099e78fc23dc49c8f071596fe0bc00a5818/diff:/var/lib/docker/overlay2/8b031095c98c34d5e370f48cb0c674a4b8f285a5e4fb78c3a76fef2df39bbd45/diff:/var/lib/docker/overlay2/15545188dde4f134f6209204348887681525e1d6f278c58c6f2e06985981fef0/diff:/var/lib/docker/overlay2/15f4ce84eabb3032bd29513036b1cfac1c2ce9f69d4b739926505fc276f48a3a/diff:/var/lib/docker/overlay2/3f1f5f82e85a8089620dfca13ee08df8382bc91b714abb87a4b7b9fef53ae811/diff:/var/lib/docker/overlay2/1b4b066ede35d5a92ced78a2d12583e508425b65997a7014db4f85fd466b28d0/diff:/var/lib/d
ocker/overlay2/8930de7c458b0d48d7dfb70a64fb4e54c4b9ff1db71d4af5c6241ade8dffec63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/08bcba086b3a36e037f665773b13d70df1ddb8703e59a86822ecfa3177db4db0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/08bcba086b3a36e037f665773b13d70df1ddb8703e59a86822ecfa3177db4db0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/08bcba086b3a36e037f665773b13d70df1ddb8703e59a86822ecfa3177db4db0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-162105",
	                "Source": "/var/lib/docker/volumes/running-upgrade-162105/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-162105",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-162105",
	                "name.minikube.sigs.k8s.io": "running-upgrade-162105",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ddba6d2e96447baa51758e1eafa847f1946512d47fe3f8568c39d7a358930e62",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52142"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ddba6d2e9644",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "8b4119b46676e4e6f884a4cda38afc1a5a10716047a8a0bac7ab6e5675a1bfac",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "3a928b6ecab9e52c08e04ca22dcaf610bf12a0b525dfd095dafa40b3be35cf51",
	                    "EndpointID": "8b4119b46676e4e6f884a4cda38afc1a5a10716047a8a0bac7ab6e5675a1bfac",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-162105 -n running-upgrade-162105
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-162105 -n running-upgrade-162105: exit status 6 (386.403618ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 16:21:51.283923   12697 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-162105" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-162105" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-162105" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-162105
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-162105: (2.327788918s)
--- FAIL: TestRunningBinaryUpgrade (47.71s)
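
All three start attempts fail at "sudo systemctl start docker" inside the kic container, and minikube's own output points at "systemctl status docker.service" and "journalctl -xe" for the root cause. Because the kic container runs systemd as PID 1, that detail can be pulled from the host before the profile is deleted; a sketch, assuming the running-upgrade-162105 container still exists:

	docker exec running-upgrade-162105 systemctl status docker.service --no-pager
	docker exec running-upgrade-162105 journalctl -u docker.service --no-pager -n 50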

                                                
                                    
TestKubernetesUpgrade (566.65s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-161955 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-161955 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m10.294755589s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-161955] minikube v1.27.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-161955 in cluster kubernetes-upgrade-161955
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 16:19:55.760640   11751 out.go:296] Setting OutFile to fd 1 ...
	I1101 16:19:55.760821   11751 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 16:19:55.760826   11751 out.go:309] Setting ErrFile to fd 2...
	I1101 16:19:55.760830   11751 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 16:19:55.760965   11751 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	I1101 16:19:55.761523   11751 out.go:303] Setting JSON to false
	I1101 16:19:55.782804   11751 start.go:116] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2970,"bootTime":1667341825,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1101 16:19:55.782906   11751 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1101 16:19:55.804588   11751 out.go:177] * [kubernetes-upgrade-161955] minikube v1.27.1 on Darwin 13.0
	I1101 16:19:55.826587   11751 notify.go:220] Checking for updates...
	I1101 16:19:55.848225   11751 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 16:19:55.912265   11751 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 16:19:55.955677   11751 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1101 16:19:55.977179   11751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 16:19:56.019500   11751 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	I1101 16:19:56.042078   11751 config.go:180] Loaded profile config "missing-upgrade-161859": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1101 16:19:56.042156   11751 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 16:19:56.108444   11751 docker.go:137] docker version: linux-20.10.20
	I1101 16:19:56.108589   11751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 16:19:56.250745   11751 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:50 SystemTime:2022-11-01 23:19:56.162002111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 16:19:56.293447   11751 out.go:177] * Using the docker driver based on user configuration
	I1101 16:19:56.314633   11751 start.go:282] selected driver: docker
	I1101 16:19:56.314662   11751 start.go:808] validating driver "docker" against <nil>
	I1101 16:19:56.314697   11751 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 16:19:56.318537   11751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 16:19:56.460657   11751 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:50 SystemTime:2022-11-01 23:19:56.371725409 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 16:19:56.460787   11751 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1101 16:19:56.460922   11751 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 16:19:56.482630   11751 out.go:177] * Using Docker Desktop driver with root privileges
	I1101 16:19:56.503371   11751 cni.go:95] Creating CNI manager for ""
	I1101 16:19:56.503388   11751 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 16:19:56.503400   11751 start_flags.go:317] config:
	{Name:kubernetes-upgrade-161955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-161955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 16:19:56.524352   11751 out.go:177] * Starting control plane node kubernetes-upgrade-161955 in cluster kubernetes-upgrade-161955
	I1101 16:19:56.582416   11751 cache.go:120] Beginning downloading kic base image for docker with docker
	I1101 16:19:56.619411   11751 out.go:177] * Pulling base image ...
	I1101 16:19:56.695499   11751 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1101 16:19:56.695530   11751 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 16:19:56.695549   11751 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1101 16:19:56.695565   11751 cache.go:57] Caching tarball of preloaded images
	I1101 16:19:56.695701   11751 preload.go:174] Found /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 16:19:56.695709   11751 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1101 16:19:56.696334   11751 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/config.json ...
	I1101 16:19:56.696443   11751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/config.json: {Name:mk3557d39a6ebe7314e655e107f5f477267073cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:19:56.752112   11751 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1101 16:19:56.752130   11751 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1101 16:19:56.752139   11751 cache.go:208] Successfully downloaded all kic artifacts
	I1101 16:19:56.752175   11751 start.go:364] acquiring machines lock for kubernetes-upgrade-161955: {Name:mke1e9d0dded8f36a1fb5876974354764da97672 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 16:19:56.752339   11751 start.go:368] acquired machines lock for "kubernetes-upgrade-161955" in 151.481µs
	I1101 16:19:56.752374   11751 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-161955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-161955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1101 16:19:56.752431   11751 start.go:125] createHost starting for "" (driver="docker")
	I1101 16:19:56.793274   11751 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1101 16:19:56.793537   11751 start.go:159] libmachine.API.Create for "kubernetes-upgrade-161955" (driver="docker")
	I1101 16:19:56.793568   11751 client.go:168] LocalClient.Create starting
	I1101 16:19:56.793657   11751 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem
	I1101 16:19:56.793703   11751 main.go:134] libmachine: Decoding PEM data...
	I1101 16:19:56.793720   11751 main.go:134] libmachine: Parsing certificate...
	I1101 16:19:56.793781   11751 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem
	I1101 16:19:56.793813   11751 main.go:134] libmachine: Decoding PEM data...
	I1101 16:19:56.793823   11751 main.go:134] libmachine: Parsing certificate...
	I1101 16:19:56.794243   11751 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-161955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 16:19:56.850929   11751 cli_runner.go:211] docker network inspect kubernetes-upgrade-161955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 16:19:56.851041   11751 network_create.go:272] running [docker network inspect kubernetes-upgrade-161955] to gather additional debugging logs...
	I1101 16:19:56.851072   11751 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-161955
	W1101 16:19:56.906610   11751 cli_runner.go:211] docker network inspect kubernetes-upgrade-161955 returned with exit code 1
	I1101 16:19:56.906638   11751 network_create.go:275] error running [docker network inspect kubernetes-upgrade-161955]: docker network inspect kubernetes-upgrade-161955: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-161955
	I1101 16:19:56.906654   11751 network_create.go:277] output of [docker network inspect kubernetes-upgrade-161955]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-161955
	
	** /stderr **
	I1101 16:19:56.906759   11751 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 16:19:56.966602   11751 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000df0190] misses:0}
	I1101 16:19:56.966640   11751 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 16:19:56.966653   11751 network_create.go:115] attempt to create docker network kubernetes-upgrade-161955 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 16:19:56.966763   11751 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-161955 kubernetes-upgrade-161955
	W1101 16:19:57.024512   11751 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-161955 kubernetes-upgrade-161955 returned with exit code 1
	W1101 16:19:57.024570   11751 network_create.go:107] failed to create docker network kubernetes-upgrade-161955 192.168.49.0/24, will retry: subnet is taken
	I1101 16:19:57.024855   11751 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000df0190] amended:false}} dirty:map[] misses:0}
	I1101 16:19:57.024877   11751 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 16:19:57.025136   11751 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000df0190] amended:true}} dirty:map[192.168.49.0:0xc000df0190 192.168.58.0:0xc00062a308] misses:0}
	I1101 16:19:57.025155   11751 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 16:19:57.025169   11751 network_create.go:115] attempt to create docker network kubernetes-upgrade-161955 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1101 16:19:57.025273   11751 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-161955 kubernetes-upgrade-161955
	W1101 16:19:57.081477   11751 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-161955 kubernetes-upgrade-161955 returned with exit code 1
	W1101 16:19:57.081514   11751 network_create.go:107] failed to create docker network kubernetes-upgrade-161955 192.168.58.0/24, will retry: subnet is taken
	I1101 16:19:57.081797   11751 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000df0190] amended:true}} dirty:map[192.168.49.0:0xc000df0190 192.168.58.0:0xc00062a308] misses:1}
	I1101 16:19:57.081816   11751 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 16:19:57.082066   11751 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000df0190] amended:true}} dirty:map[192.168.49.0:0xc000df0190 192.168.58.0:0xc00062a308 192.168.67.0:0xc000b30a18] misses:1}
	I1101 16:19:57.082083   11751 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 16:19:57.082091   11751 network_create.go:115] attempt to create docker network kubernetes-upgrade-161955 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1101 16:19:57.082186   11751 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-161955 kubernetes-upgrade-161955
	I1101 16:19:57.171087   11751 network_create.go:99] docker network kubernetes-upgrade-161955 192.168.67.0/24 created
	I1101 16:19:57.171132   11751 kic.go:106] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-161955" container
	I1101 16:19:57.171285   11751 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 16:19:57.229814   11751 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-161955 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-161955 --label created_by.minikube.sigs.k8s.io=true
	I1101 16:19:57.287688   11751 oci.go:103] Successfully created a docker volume kubernetes-upgrade-161955
	I1101 16:19:57.287812   11751 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-161955-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-161955 --entrypoint /usr/bin/test -v kubernetes-upgrade-161955:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1101 16:19:57.752112   11751 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-161955
	I1101 16:19:57.752151   11751 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1101 16:19:57.752165   11751 kic.go:179] Starting extracting preloaded images to volume ...
	I1101 16:19:57.752294   11751 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-161955:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 16:20:02.603227   11751 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-161955:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (4.850887343s)
	I1101 16:20:02.603250   11751 kic.go:188] duration metric: took 4.851120 seconds to extract preloaded images to volume
	I1101 16:20:02.603365   11751 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 16:20:02.766331   11751 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-161955 --name kubernetes-upgrade-161955 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-161955 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-161955 --network kubernetes-upgrade-161955 --ip 192.168.67.2 --volume kubernetes-upgrade-161955:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1101 16:20:03.195652   11751 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-161955 --format={{.State.Running}}
	I1101 16:20:03.258288   11751 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-161955 --format={{.State.Status}}
	I1101 16:20:03.329440   11751 cli_runner.go:164] Run: docker exec kubernetes-upgrade-161955 stat /var/lib/dpkg/alternatives/iptables
	I1101 16:20:03.484158   11751 oci.go:144] the created container "kubernetes-upgrade-161955" has a running status.
	I1101 16:20:03.484214   11751 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/kubernetes-upgrade-161955/id_rsa...
	I1101 16:20:03.634404   11751 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/kubernetes-upgrade-161955/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 16:20:03.739772   11751 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-161955 --format={{.State.Status}}
	I1101 16:20:03.799534   11751 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 16:20:03.799552   11751 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-161955 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 16:20:03.907552   11751 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-161955 --format={{.State.Status}}
	I1101 16:20:03.968835   11751 machine.go:88] provisioning docker machine ...
	I1101 16:20:03.968878   11751 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-161955"
	I1101 16:20:03.968989   11751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:20:04.028792   11751 main.go:134] libmachine: Using SSH client type: native
	I1101 16:20:04.029039   11751 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52010 <nil> <nil>}
	I1101 16:20:04.029052   11751 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-161955 && echo "kubernetes-upgrade-161955" | sudo tee /etc/hostname
	I1101 16:20:04.154940   11751 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-161955
	
	I1101 16:20:04.155037   11751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:20:04.214642   11751 main.go:134] libmachine: Using SSH client type: native
	I1101 16:20:04.214821   11751 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52010 <nil> <nil>}
	I1101 16:20:04.214840   11751 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-161955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-161955/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-161955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 16:20:04.335703   11751 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 16:20:04.335726   11751 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15232-2108/.minikube CaCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15232-2108/.minikube}
	I1101 16:20:04.335747   11751 ubuntu.go:177] setting up certificates
	I1101 16:20:04.335758   11751 provision.go:83] configureAuth start
	I1101 16:20:04.335854   11751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-161955
	I1101 16:20:04.394229   11751 provision.go:138] copyHostCerts
	I1101 16:20:04.394352   11751 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem, removing ...
	I1101 16:20:04.394361   11751 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem
	I1101 16:20:04.394462   11751 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem (1082 bytes)
	I1101 16:20:04.394676   11751 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem, removing ...
	I1101 16:20:04.394682   11751 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem
	I1101 16:20:04.394759   11751 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem (1123 bytes)
	I1101 16:20:04.394930   11751 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem, removing ...
	I1101 16:20:04.394936   11751 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem
	I1101 16:20:04.394997   11751 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem (1675 bytes)
	I1101 16:20:04.395136   11751 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-161955 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-161955]
	I1101 16:20:04.441915   11751 provision.go:172] copyRemoteCerts
	I1101 16:20:04.441980   11751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 16:20:04.442056   11751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:20:04.502693   11751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52010 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/kubernetes-upgrade-161955/id_rsa Username:docker}
	I1101 16:20:04.591060   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 16:20:04.609460   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 16:20:04.626884   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 16:20:04.644504   11751 provision.go:86] duration metric: configureAuth took 308.734562ms
	I1101 16:20:04.644529   11751 ubuntu.go:193] setting minikube options for container-runtime
	I1101 16:20:04.644751   11751 config.go:180] Loaded profile config "kubernetes-upgrade-161955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1101 16:20:04.644825   11751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:20:04.707037   11751 main.go:134] libmachine: Using SSH client type: native
	I1101 16:20:04.707200   11751 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52010 <nil> <nil>}
	I1101 16:20:04.707216   11751 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 16:20:04.851910   11751 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1101 16:20:04.851922   11751 ubuntu.go:71] root file system type: overlay
	I1101 16:20:04.852434   11751 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 16:20:04.852619   11751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:20:04.916651   11751 main.go:134] libmachine: Using SSH client type: native
	I1101 16:20:04.916806   11751 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52010 <nil> <nil>}
	I1101 16:20:04.916860   11751 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 16:20:05.043987   11751 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 16:20:05.044099   11751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:20:05.103615   11751 main.go:134] libmachine: Using SSH client type: native
	I1101 16:20:05.103793   11751 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52010 <nil> <nil>}
	I1101 16:20:05.103806   11751 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 16:20:05.709085   11751 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-01 23:20:05.049496968 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1101 16:20:05.709115   11751 machine.go:91] provisioned docker machine in 1.740272882s
	I1101 16:20:05.709123   11751 client.go:171] LocalClient.Create took 8.915614643s
	I1101 16:20:05.709142   11751 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-161955" took 8.915669641s
	I1101 16:20:05.709155   11751 start.go:300] post-start starting for "kubernetes-upgrade-161955" (driver="docker")
	I1101 16:20:05.709160   11751 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 16:20:05.709247   11751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 16:20:05.709328   11751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:20:05.795783   11751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52010 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/kubernetes-upgrade-161955/id_rsa Username:docker}
	I1101 16:20:05.886355   11751 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 16:20:05.889738   11751 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 16:20:05.889754   11751 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 16:20:05.889761   11751 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 16:20:05.889767   11751 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1101 16:20:05.889777   11751 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/addons for local assets ...
	I1101 16:20:05.889871   11751 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/files for local assets ...
	I1101 16:20:05.890047   11751 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem -> 34132.pem in /etc/ssl/certs
	I1101 16:20:05.890229   11751 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 16:20:05.897135   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /etc/ssl/certs/34132.pem (1708 bytes)
	I1101 16:20:05.915925   11751 start.go:303] post-start completed in 206.762673ms
	I1101 16:20:05.916700   11751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-161955
	I1101 16:20:05.976510   11751 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/config.json ...
	I1101 16:20:05.976959   11751 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 16:20:05.977023   11751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:20:06.037407   11751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52010 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/kubernetes-upgrade-161955/id_rsa Username:docker}
	I1101 16:20:06.123549   11751 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 16:20:06.127900   11751 start.go:128] duration metric: createHost completed in 9.375529819s
	I1101 16:20:06.127913   11751 start.go:83] releasing machines lock for "kubernetes-upgrade-161955", held for 9.375634841s
	I1101 16:20:06.128003   11751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-161955
	I1101 16:20:06.187990   11751 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1101 16:20:06.188002   11751 ssh_runner.go:195] Run: systemctl --version
	I1101 16:20:06.188087   11751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:20:06.188091   11751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:20:06.377267   11751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52010 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/kubernetes-upgrade-161955/id_rsa Username:docker}
	I1101 16:20:06.386158   11751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52010 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/kubernetes-upgrade-161955/id_rsa Username:docker}
	I1101 16:20:06.461571   11751 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 16:20:06.730928   11751 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1101 16:20:06.731013   11751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 16:20:06.741460   11751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 16:20:06.756073   11751 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 16:20:06.832563   11751 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 16:20:06.899444   11751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 16:20:06.977088   11751 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 16:20:07.204055   11751 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 16:20:07.240324   11751 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 16:20:07.315436   11751 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	I1101 16:20:07.315666   11751 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-161955 dig +short host.docker.internal
	I1101 16:20:07.436005   11751 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1101 16:20:07.436127   11751 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1101 16:20:07.441461   11751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 16:20:07.452895   11751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:20:07.514287   11751 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1101 16:20:07.514396   11751 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 16:20:07.541816   11751 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1101 16:20:07.541834   11751 docker.go:543] Images already preloaded, skipping extraction
	I1101 16:20:07.541962   11751 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 16:20:07.566843   11751 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1101 16:20:07.566861   11751 cache_images.go:84] Images are preloaded, skipping loading
	I1101 16:20:07.566963   11751 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1101 16:20:07.647741   11751 cni.go:95] Creating CNI manager for ""
	I1101 16:20:07.647757   11751 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 16:20:07.647778   11751 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 16:20:07.647800   11751 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-161955 NodeName:kubernetes-upgrade-161955 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1101 16:20:07.647954   11751 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-161955"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-161955
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 16:20:07.648046   11751 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-161955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-161955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 16:20:07.648116   11751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1101 16:20:07.656695   11751 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 16:20:07.656765   11751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 16:20:07.665145   11751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I1101 16:20:07.680148   11751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 16:20:07.694305   11751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1101 16:20:07.709179   11751 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1101 16:20:07.713721   11751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 16:20:07.725005   11751 certs.go:54] Setting up /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955 for IP: 192.168.67.2
	I1101 16:20:07.725155   11751 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key
	I1101 16:20:07.725239   11751 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key
	I1101 16:20:07.725291   11751 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.key
	I1101 16:20:07.725314   11751 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.crt with IP's: []
	I1101 16:20:07.913930   11751 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.crt ...
	I1101 16:20:07.913947   11751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.crt: {Name:mkfc7a50a30fc2d7a6b386444bfd8107798b8f7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:20:07.914310   11751 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.key ...
	I1101 16:20:07.914320   11751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.key: {Name:mk55df37b976b33dab97cc798c49e4218d692608 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:20:07.914556   11751 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.key.c7fa3a9e
	I1101 16:20:07.914579   11751 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 16:20:08.116270   11751 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.crt.c7fa3a9e ...
	I1101 16:20:08.116288   11751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.crt.c7fa3a9e: {Name:mkf82d3d4cb1abda32602baa7732541877d162d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:20:08.116611   11751 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.key.c7fa3a9e ...
	I1101 16:20:08.116620   11751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.key.c7fa3a9e: {Name:mk1a59ceb85267d597deb553db3529e4dc12170c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:20:08.116822   11751 certs.go:320] copying /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.crt
	I1101 16:20:08.117011   11751 certs.go:324] copying /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.key
	I1101 16:20:08.117176   11751 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/proxy-client.key
	I1101 16:20:08.117197   11751 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/proxy-client.crt with IP's: []
	I1101 16:20:08.207238   11751 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/proxy-client.crt ...
	I1101 16:20:08.207253   11751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/proxy-client.crt: {Name:mkebeabf031adf255d848eb3241ea204b7a143c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:20:08.207553   11751 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/proxy-client.key ...
	I1101 16:20:08.207562   11751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/proxy-client.key: {Name:mke2954cb8b163f176192d670ce582d3ae99a885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:20:08.208009   11751 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem (1338 bytes)
	W1101 16:20:08.208061   11751 certs.go:384] ignoring /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413_empty.pem, impossibly tiny 0 bytes
	I1101 16:20:08.208079   11751 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 16:20:08.208116   11751 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem (1082 bytes)
	I1101 16:20:08.208149   11751 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem (1123 bytes)
	I1101 16:20:08.208182   11751 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem (1675 bytes)
	I1101 16:20:08.208258   11751 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem (1708 bytes)
	I1101 16:20:08.208800   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 16:20:08.229355   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 16:20:08.249049   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 16:20:08.268766   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 16:20:08.288674   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 16:20:08.306135   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 16:20:08.326313   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 16:20:08.346246   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 16:20:08.365053   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /usr/share/ca-certificates/34132.pem (1708 bytes)
	I1101 16:20:08.382620   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 16:20:08.400396   11751 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem --> /usr/share/ca-certificates/3413.pem (1338 bytes)
	I1101 16:20:08.420955   11751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 16:20:08.435001   11751 ssh_runner.go:195] Run: openssl version
	I1101 16:20:08.441098   11751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34132.pem && ln -fs /usr/share/ca-certificates/34132.pem /etc/ssl/certs/34132.pem"
	I1101 16:20:08.450672   11751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34132.pem
	I1101 16:20:08.455816   11751 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  1 22:49 /usr/share/ca-certificates/34132.pem
	I1101 16:20:08.455892   11751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34132.pem
	I1101 16:20:08.462346   11751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34132.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 16:20:08.475469   11751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 16:20:08.484885   11751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:20:08.489747   11751 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  1 22:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:20:08.489816   11751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:20:08.499759   11751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 16:20:08.512311   11751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3413.pem && ln -fs /usr/share/ca-certificates/3413.pem /etc/ssl/certs/3413.pem"
	I1101 16:20:08.522141   11751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3413.pem
	I1101 16:20:08.527722   11751 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  1 22:49 /usr/share/ca-certificates/3413.pem
	I1101 16:20:08.527806   11751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3413.pem
	I1101 16:20:08.534646   11751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3413.pem /etc/ssl/certs/51391683.0"
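The ssh_runner steps above are the pattern minikube uses to install its CA certificates into the node's OpenSSL trust store: copy the PEM under /usr/share/ca-certificates, compute its subject hash with openssl x509 -hash -noout, and symlink it into /etc/ssl/certs as <hash>.0. A minimal manual sketch of the same pattern, run inside the node (exampleCA.pem and cert.pem are placeholder names, not files from this run):

    # copy a CA certificate into the shared directory used in the log above
    sudo cp cert.pem /usr/share/ca-certificates/exampleCA.pem
    # derive the OpenSSL subject-hash that the symlink name is based on
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/exampleCA.pem)
    # link it into /etc/ssl/certs so OpenSSL-based clients on the node trust it
    sudo ln -fs /usr/share/ca-certificates/exampleCA.pem "/etc/ssl/certs/${HASH}.0"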
	I1101 16:20:08.545049   11751 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-161955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-161955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 16:20:08.545195   11751 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 16:20:08.572477   11751 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 16:20:08.582351   11751 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 16:20:08.592628   11751 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 16:20:08.592702   11751 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 16:20:08.600924   11751 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 16:20:08.600955   11751 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
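The init above is launched with a long --ignore-preflight-errors list (SystemVerification is skipped explicitly because of the docker driver, as noted earlier in the log). To see which preflight checks would otherwise trip on this node without running a full init, kubeadm exposes the preflight phase on its own; a sketch, assuming the same binary path and config file shown in the log:

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml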
	I1101 16:20:08.669349   11751 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1101 16:20:08.669417   11751 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 16:20:09.046934   11751 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 16:20:09.047133   11751 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 16:20:09.047209   11751 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 16:20:09.344541   11751 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 16:20:09.345616   11751 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 16:20:09.355769   11751 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1101 16:20:09.434219   11751 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 16:20:09.456300   11751 out.go:204]   - Generating certificates and keys ...
	I1101 16:20:09.456399   11751 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 16:20:09.456487   11751 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 16:20:09.537117   11751 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 16:20:09.805272   11751 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1101 16:20:10.082853   11751 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1101 16:20:10.651212   11751 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1101 16:20:10.919983   11751 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1101 16:20:10.920134   11751 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-161955 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1101 16:20:11.044409   11751 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1101 16:20:11.044566   11751 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-161955 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1101 16:20:11.115845   11751 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 16:20:11.322802   11751 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 16:20:11.367516   11751 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1101 16:20:11.367618   11751 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 16:20:11.580242   11751 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 16:20:11.640230   11751 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 16:20:11.778065   11751 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 16:20:11.935844   11751 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 16:20:11.936830   11751 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 16:20:11.960454   11751 out.go:204]   - Booting up control plane ...
	I1101 16:20:11.960644   11751 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 16:20:11.960769   11751 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 16:20:11.960879   11751 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 16:20:11.961011   11751 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 16:20:11.961253   11751 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 16:20:51.917615   11751 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1101 16:20:51.917956   11751 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:20:51.918177   11751 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:20:56.915517   11751 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:20:56.915715   11751 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:21:06.909372   11751 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:21:06.909517   11751 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:21:26.896902   11751 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:21:26.897109   11751 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:22:06.867852   11751 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:22:06.868060   11751 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:22:06.868071   11751 kubeadm.go:317] 
	I1101 16:22:06.868107   11751 kubeadm.go:317] Unfortunately, an error has occurred:
	I1101 16:22:06.868165   11751 kubeadm.go:317] 	timed out waiting for the condition
	I1101 16:22:06.868212   11751 kubeadm.go:317] 
	I1101 16:22:06.868248   11751 kubeadm.go:317] This error is likely caused by:
	I1101 16:22:06.868329   11751 kubeadm.go:317] 	- The kubelet is not running
	I1101 16:22:06.868470   11751 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1101 16:22:06.868478   11751 kubeadm.go:317] 
	I1101 16:22:06.868591   11751 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1101 16:22:06.868631   11751 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1101 16:22:06.868671   11751 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1101 16:22:06.868675   11751 kubeadm.go:317] 
	I1101 16:22:06.868807   11751 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1101 16:22:06.868897   11751 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1101 16:22:06.869017   11751 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1101 16:22:06.869055   11751 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1101 16:22:06.869115   11751 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1101 16:22:06.869180   11751 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1101 16:22:06.872635   11751 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1101 16:22:06.872737   11751 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1101 16:22:06.872834   11751 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 16:22:06.872902   11751 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1101 16:22:06.872955   11751 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
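kubeadm's own troubleshooting hints above can be collected in one pass on the node; a minimal sketch using only the commands kubeadm suggests (the --no-pager/tail trimming is an addition for brevity, and CONTAINERID is a placeholder for whatever failing container the listing turns up):

    # kubelet service state and recent unit logs
    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 100
    # list kubernetes containers, running or exited, excluding pause sandboxes
    docker ps -a | grep kube | grep -v pause
    # then inspect the logs of the failing container found above
    docker logs CONTAINERID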
	W1101 16:22:06.873137   11751 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-161955 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-161955 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1101 16:22:06.873167   11751 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1101 16:22:07.296051   11751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 16:22:07.306921   11751 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 16:22:07.307000   11751 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 16:22:07.316362   11751 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 16:22:07.316388   11751 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 16:22:07.534675   11751 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1101 16:22:07.615469   11751 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1101 16:22:07.695358   11751 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 16:24:03.349359   11751 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1101 16:24:03.349438   11751 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1101 16:24:03.353370   11751 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1101 16:24:03.353407   11751 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 16:24:03.353474   11751 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 16:24:03.353548   11751 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 16:24:03.353624   11751 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 16:24:03.353704   11751 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 16:24:03.353777   11751 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 16:24:03.353813   11751 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1101 16:24:03.353859   11751 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 16:24:03.374887   11751 out.go:204]   - Generating certificates and keys ...
	I1101 16:24:03.375018   11751 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 16:24:03.375107   11751 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 16:24:03.375235   11751 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 16:24:03.375358   11751 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1101 16:24:03.375493   11751 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 16:24:03.375584   11751 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1101 16:24:03.375680   11751 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1101 16:24:03.375774   11751 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1101 16:24:03.375890   11751 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 16:24:03.376030   11751 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 16:24:03.376116   11751 kubeadm.go:317] [certs] Using the existing "sa" key
	I1101 16:24:03.376208   11751 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 16:24:03.376288   11751 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 16:24:03.376376   11751 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 16:24:03.376462   11751 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 16:24:03.376539   11751 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 16:24:03.376666   11751 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 16:24:03.417713   11751 out.go:204]   - Booting up control plane ...
	I1101 16:24:03.417825   11751 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 16:24:03.417904   11751 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 16:24:03.417960   11751 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 16:24:03.418025   11751 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 16:24:03.418135   11751 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 16:24:03.418167   11751 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1101 16:24:03.418214   11751 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:24:03.418376   11751 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:24:03.418425   11751 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:24:03.418572   11751 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:24:03.418631   11751 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:24:03.418769   11751 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:24:03.418833   11751 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:24:03.418971   11751 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:24:03.419026   11751 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:24:03.419165   11751 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:24:03.419174   11751 kubeadm.go:317] 
	I1101 16:24:03.419202   11751 kubeadm.go:317] Unfortunately, an error has occurred:
	I1101 16:24:03.419234   11751 kubeadm.go:317] 	timed out waiting for the condition
	I1101 16:24:03.419243   11751 kubeadm.go:317] 
	I1101 16:24:03.419273   11751 kubeadm.go:317] This error is likely caused by:
	I1101 16:24:03.419299   11751 kubeadm.go:317] 	- The kubelet is not running
	I1101 16:24:03.419383   11751 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1101 16:24:03.419393   11751 kubeadm.go:317] 
	I1101 16:24:03.419463   11751 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1101 16:24:03.419484   11751 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1101 16:24:03.419516   11751 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1101 16:24:03.419527   11751 kubeadm.go:317] 
	I1101 16:24:03.419610   11751 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1101 16:24:03.419686   11751 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1101 16:24:03.419758   11751 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1101 16:24:03.419787   11751 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1101 16:24:03.419842   11751 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1101 16:24:03.419870   11751 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1101 16:24:03.419891   11751 kubeadm.go:398] StartCluster complete in 3m54.876586345s
	I1101 16:24:03.419988   11751 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:24:03.443480   11751 logs.go:274] 0 containers: []
	W1101 16:24:03.443501   11751 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:24:03.443593   11751 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:24:03.465304   11751 logs.go:274] 0 containers: []
	W1101 16:24:03.465315   11751 logs.go:276] No container was found matching "etcd"
	I1101 16:24:03.465397   11751 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:24:03.487648   11751 logs.go:274] 0 containers: []
	W1101 16:24:03.487659   11751 logs.go:276] No container was found matching "coredns"
	I1101 16:24:03.487752   11751 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:24:03.510277   11751 logs.go:274] 0 containers: []
	W1101 16:24:03.510290   11751 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:24:03.510372   11751 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:24:03.533838   11751 logs.go:274] 0 containers: []
	W1101 16:24:03.533851   11751 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:24:03.533940   11751 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:24:03.557101   11751 logs.go:274] 0 containers: []
	W1101 16:24:03.557114   11751 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:24:03.557227   11751 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:24:03.580665   11751 logs.go:274] 0 containers: []
	W1101 16:24:03.580680   11751 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:24:03.580769   11751 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:24:03.603763   11751 logs.go:274] 0 containers: []
	W1101 16:24:03.603776   11751 logs.go:276] No container was found matching "kube-controller-manager"
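The probes above all use the same docker ps name filter (k8s_<component>). A quick way to repeat the same sweep by hand is to loop over the component names minikube checks; a sketch (the filter and component names are taken from the log, the {{.Status}} column is an addition):

    # check each expected kubernetes container via the k8s_<name> naming convention
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kubernetes-dashboard storage-provisioner kube-controller-manager; do
      echo "== ${c} =="
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}} {{.Status}}'
    done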
	I1101 16:24:03.603783   11751 logs.go:123] Gathering logs for dmesg ...
	I1101 16:24:03.603790   11751 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:24:03.617768   11751 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:24:03.617784   11751 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:24:03.674271   11751 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:24:03.674283   11751 logs.go:123] Gathering logs for Docker ...
	I1101 16:24:03.674290   11751 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:24:03.693407   11751 logs.go:123] Gathering logs for container status ...
	I1101 16:24:03.693422   11751 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:24:05.746341   11751 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052921502s)
	I1101 16:24:05.746458   11751 logs.go:123] Gathering logs for kubelet ...
	I1101 16:24:05.746471   11751 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 16:24:05.786363   11751 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1101 16:24:05.786383   11751 out.go:239] * 
	W1101 16:24:05.786532   11751 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 16:24:05.786546   11751 out.go:239] * 
	W1101 16:24:05.787243   11751 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 16:24:05.849317   11751 out.go:177] 
	W1101 16:24:05.891320   11751 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 16:24:05.891445   11751 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1101 16:24:05.891500   11751 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1101 16:24:05.933349   11751 out.go:177] 

                                                
                                                
** /stderr **
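(Editor's note, not part of the test run: a hedged, illustrative follow-up on the K8S_KUBELET_NOT_RUNNING failure above. The retry uses the exact `--extra-config=kubelet.cgroup-driver=systemd` flag that minikube itself suggests in the stderr block, and the profile/container name `kubernetes-upgrade-161955` is taken from this report; the `docker exec ... journalctl` step assumes the kicbase node container is still running with systemd inside, as its inspect output later in this log indicates.)

	# Sketch only: retry the failing start with the suggested cgroup-driver override,
	# then read kubelet logs from inside the node container.
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-161955 \
	  --memory=2200 --kubernetes-version=v1.16.0 --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd
	docker exec kubernetes-upgrade-161955 journalctl -xeu kubelet --no-pager | tail -n 50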
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-161955 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-161955
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-161955: (1.610131106s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-161955 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-161955 status --format={{.Host}}: exit status 7 (129.778519ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-161955 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker 
E1101 16:24:12.996334    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-161955 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker : (4m37.718383094s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-161955 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-161955 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-161955 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (557.333242ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-161955] minikube v1.27.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-161955
	    minikube start -p kubernetes-upgrade-161955 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1619552 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-161955 --kubernetes-version=v1.25.3
	    

                                                
                                                
** /stderr **
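(Editor's note, not part of the test run: a hedged sketch of the manual path around the K8S_DOWNGRADE_UNSUPPORTED exit above. The version check mirrors the `kubectl ... version --output=json` call this test already makes, and the delete/start pair is the first option from minikube's own suggestion text; all names are from this report.)

	# Sketch only: confirm the profile's current server version, then recreate the
	# cluster at the older version instead of downgrading in place (exit status 106).
	kubectl --context kubernetes-upgrade-161955 version --output=json | grep gitVersion
	out/minikube-darwin-amd64 delete -p kubernetes-upgrade-161955
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-161955 --kubernetes-version=v1.16.0 --driver=docker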
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-161955 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker 
E1101 16:28:59.719107    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-161955 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker : (29.506462285s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2022-11-01 16:29:15.593158 -0700 PDT m=+2704.768525721
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-161955
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-161955:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4337ff2b55ceeab3735f6ab768e8f07b5b595af50efad7e39618b5fc8e863c3",
	        "Created": "2022-11-01T23:20:02.8684489Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 158715,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T23:24:09.170092568Z",
	            "FinishedAt": "2022-11-01T23:24:06.491679179Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/d4337ff2b55ceeab3735f6ab768e8f07b5b595af50efad7e39618b5fc8e863c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4337ff2b55ceeab3735f6ab768e8f07b5b595af50efad7e39618b5fc8e863c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4337ff2b55ceeab3735f6ab768e8f07b5b595af50efad7e39618b5fc8e863c3/hosts",
	        "LogPath": "/var/lib/docker/containers/d4337ff2b55ceeab3735f6ab768e8f07b5b595af50efad7e39618b5fc8e863c3/d4337ff2b55ceeab3735f6ab768e8f07b5b595af50efad7e39618b5fc8e863c3-json.log",
	        "Name": "/kubernetes-upgrade-161955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-161955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-161955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/491da108f38a39a4333a689e2e968d5013152eaa0e035de49673de06eb3f2bc8-init/diff:/var/lib/docker/overlay2/397c781354d1ae8b5c71df69b26a9a2493cf01723d23317a9b36f56b62ab53f3/diff:/var/lib/docker/overlay2/fe3fd9f7a011255c997093c6f7e1cb70c20cab26db5f52ff8b83c33d58519532/diff:/var/lib/docker/overlay2/f7328bad1e482720081fe1f9d1ab2ee05c71a9060abf63daf63a25e84818f237/diff:/var/lib/docker/overlay2/ca039979ed22affed678394443deee5ed35f2eb49243537b4205433189b87b2c/diff:/var/lib/docker/overlay2/a2ee3e754036b8777f801c847988e78d9b0ef881e82ea7467cef35a1261b9e20/diff:/var/lib/docker/overlay2/3de609efaeca546b0261017a1b19a9fa9ff6c9272609346b897e8075687c3698/diff:/var/lib/docker/overlay2/9101d388c406c87b2d10dc219dc3225ea59bfbedfc167adbfdf7578ed74a528b/diff:/var/lib/docker/overlay2/ba2db849d29a96ccb7729ee8861cfb647a06ba046b1016e99e3c2ef9e7b92675/diff:/var/lib/docker/overlay2/bb7315b5e1884c47eaad6eddfa4e422b1b240ff1d1112deab5ff41e40a12970d/diff:/var/lib/docker/overlay2/25fd1b
7d003c93a7ef576bb052318e940d8e1c8a40db37179b03563a8a099490/diff:/var/lib/docker/overlay2/f22743b1afcc328f7d2c4740efeb1401d6c011f499d200dc16b11a352dfc07f7/diff:/var/lib/docker/overlay2/59ca3268b7b3862516f40c07f313c5cdbe659f949ce4bd6e4eedcfcdd80409b0/diff:/var/lib/docker/overlay2/ce66536b9c7b7d4d38eeb3b0f5842c927c181c4584e60fa25989b9de30ec5856/diff:/var/lib/docker/overlay2/f0bdec7810d2b53f48492f34d7889fdb7c86d692422978de474816cf3bf8e923/diff:/var/lib/docker/overlay2/b0f0a882b23b6635539c83a8a2837c52090aa306e12f64ed83edcd03596f0cde/diff:/var/lib/docker/overlay2/60180139b1a11a94ee6174e6512bad4a5e162470c686d6cc7c91d7c9fb1907a2/diff:/var/lib/docker/overlay2/f1a7c8c448077705a2b48dfccf2f6e599a8ef782efd7d171b349ad43a0cddcae/diff:/var/lib/docker/overlay2/d64e00c1407419f2261e34d0974453ad696f514f79d8ecdac1b8c3a2a117349c/diff:/var/lib/docker/overlay2/7af90e8306e3b3e8ed7d2d67099da7a7cbe0ed97a5b983c84548135857efc4d0/diff:/var/lib/docker/overlay2/85101cd67d726a8a42d8951a230b3acd76d4a62615c6ffe4aac1ebef17ab422d/diff:/var/lib/d
ocker/overlay2/09a5d9c2f9897ae114e76d4aed5af38d250d044b1d274f8dafa0cfd17789ea54/diff:/var/lib/docker/overlay2/a6b97f972b460567b473da6022dd8658db13cb06830fcb676e8c1ebc927e1d44/diff:/var/lib/docker/overlay2/b569cecedfd9b79ea9a49645099405472d529e224ffe4abed0921d9fbec171a7/diff:/var/lib/docker/overlay2/278ceb611708e5dc8e810eaeb6b08b283d298009965d14772f2b61f95355477a/diff:/var/lib/docker/overlay2/c6693259dde0f3190d9019d8aca0c27c980d5c31a40fff8274d2a57d8ef19f41/diff:/var/lib/docker/overlay2/4db1d3b0ba37b1bfa0f486b9c1b327686a1069e2e6cbfc2e279c1f597f7cd346/diff:/var/lib/docker/overlay2/50e4b8ce3599837ac51b108fd983aa9b876f47f3e7253cd0976be8df23c73a33/diff:/var/lib/docker/overlay2/ad2b5d101e83bca01ddb2257701208ceb46b4668f6d14e84ee171975bb6175db/diff:/var/lib/docker/overlay2/746a904e8c69bb992522394e576896d4e35d056023809a58fbac92d497d2968a/diff:/var/lib/docker/overlay2/03794e35d9fe845753f9bcb5648e7a7c1fcf7db9bcd82c7c3824c2142cb8a2b6/diff:/var/lib/docker/overlay2/75caadeb2dfb8cc524a4e0f9d7862ccf017f755a24e00453f5a85eb29a5
837de/diff:/var/lib/docker/overlay2/1a5ce4ae9316bb13d1739267bf6b30a17188ca9ac127663735bfac3d15e50abe/diff:/var/lib/docker/overlay2/fa61eaf7b77e6fa75456860b8b75e4779478979f9b4ad94cd62eadd22743421e/diff:/var/lib/docker/overlay2/9c1cd4fe6bd059e33f020198f5ff305dab3f4b102b14b5894c76cae7dc769b92/diff:/var/lib/docker/overlay2/46cf92e0e9cc79002bfb0f5c2e0ab28c771f260b3fea2cb434cd84d3a1ea7659/diff:/var/lib/docker/overlay2/b47be14a30a9c0339a3a49b552cad979169d6c9a909e7837759a155b4c74d128/diff:/var/lib/docker/overlay2/598716c3d9ddb5de953d6a462fc1af49f742bbe02fd1c01f7d548a9f93d3913d/diff:/var/lib/docker/overlay2/cd665df1518202898f79e694456b55b64d6095a28556be2dc545241df7633be7/diff:/var/lib/docker/overlay2/909b0f879f4ce91be83bada76dad0599c2839fa8a6534f976ee095ad44dce7c6/diff:/var/lib/docker/overlay2/fd78ebbf3c4baf9a9f0036cb0ed9a8908a05f2e78572d88fcb3f026cb000710b/diff:/var/lib/docker/overlay2/8a030c72fc8571d3240e0ab2d2aea23b84385f28f3ef2dd82b5be5b925dbca5b/diff:/var/lib/docker/overlay2/d87a4221a646268a958798509b8c3cb343463c
c8427ae96a424f653a0a4508c7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/491da108f38a39a4333a689e2e968d5013152eaa0e035de49673de06eb3f2bc8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/491da108f38a39a4333a689e2e968d5013152eaa0e035de49673de06eb3f2bc8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/491da108f38a39a4333a689e2e968d5013152eaa0e035de49673de06eb3f2bc8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-161955",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-161955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-161955",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-161955",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-161955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fa3d54178f8099d2d8d5894e9a2c566b4b376aaabf0748b27eb819631761f72f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52336"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52337"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52338"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52339"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52340"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fa3d54178f80",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-161955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d4337ff2b55c",
	                        "kubernetes-upgrade-161955"
	                    ],
	                    "NetworkID": "3a4bd193111bd3b9fa879cc60fa4f5ba123a305382b1e2ea5c96f72acec0d5e0",
	                    "EndpointID": "c9e128a8dba1f6eb218474d5536a309f2af1b67342b542bd2bbf36ad9628e1f0",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
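(Editor's note, not part of the test run: a hedged sketch for narrowing the full `docker inspect` payload above to the fields the post-mortem actually reads. The state fields come from the JSON above, and the 22/tcp host-port template is the same one minikube runs later in this log; treat it purely as an illustration.)

	# Sketch only: query just the container state and the mapped SSH port
	# instead of dumping the whole inspect document.
	docker inspect -f '{{.State.Status}} {{.State.StartedAt}}' kubernetes-upgrade-161955
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' kubernetes-upgrade-161955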
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-161955 -n kubernetes-upgrade-161955
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-161955 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-161955 logs -n 25: (2.821806046s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| profile | list --output json             | minikube                  | jenkins | v1.27.1 | 01 Nov 22 16:23 PDT | 01 Nov 22 16:23 PDT |
	| delete  | -p pause-162153                | pause-162153              | jenkins | v1.27.1 | 01 Nov 22 16:23 PDT | 01 Nov 22 16:23 PDT |
	| start   | -p NoKubernetes-162346         | NoKubernetes-162346       | jenkins | v1.27.1 | 01 Nov 22 16:23 PDT |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-162346         | NoKubernetes-162346       | jenkins | v1.27.1 | 01 Nov 22 16:23 PDT | 01 Nov 22 16:24 PDT |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-161955   | kubernetes-upgrade-161955 | jenkins | v1.27.1 | 01 Nov 22 16:24 PDT | 01 Nov 22 16:24 PDT |
	| start   | -p kubernetes-upgrade-161955   | kubernetes-upgrade-161955 | jenkins | v1.27.1 | 01 Nov 22 16:24 PDT | 01 Nov 22 16:28 PDT |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-162346         | NoKubernetes-162346       | jenkins | v1.27.1 | 01 Nov 22 16:24 PDT | 01 Nov 22 16:24 PDT |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-162346         | NoKubernetes-162346       | jenkins | v1.27.1 | 01 Nov 22 16:24 PDT | 01 Nov 22 16:24 PDT |
	| start   | -p NoKubernetes-162346         | NoKubernetes-162346       | jenkins | v1.27.1 | 01 Nov 22 16:24 PDT | 01 Nov 22 16:24 PDT |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-162346 sudo    | NoKubernetes-162346       | jenkins | v1.27.1 | 01 Nov 22 16:24 PDT |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| profile | list                           | minikube                  | jenkins | v1.27.1 | 01 Nov 22 16:24 PDT | 01 Nov 22 16:24 PDT |
	| profile | list --output=json             | minikube                  | jenkins | v1.27.1 | 01 Nov 22 16:24 PDT | 01 Nov 22 16:25 PDT |
	| stop    | -p NoKubernetes-162346         | NoKubernetes-162346       | jenkins | v1.27.1 | 01 Nov 22 16:25 PDT | 01 Nov 22 16:25 PDT |
	| start   | -p NoKubernetes-162346         | NoKubernetes-162346       | jenkins | v1.27.1 | 01 Nov 22 16:25 PDT | 01 Nov 22 16:25 PDT |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-162346 sudo    | NoKubernetes-162346       | jenkins | v1.27.1 | 01 Nov 22 16:25 PDT |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-162346         | NoKubernetes-162346       | jenkins | v1.27.1 | 01 Nov 22 16:25 PDT | 01 Nov 22 16:25 PDT |
	| start   | -p force-systemd-flag-162516   | force-systemd-flag-162516 | jenkins | v1.27.1 | 01 Nov 22 16:25 PDT | 01 Nov 22 16:25 PDT |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-162516      | force-systemd-flag-162516 | jenkins | v1.27.1 | 01 Nov 22 16:25 PDT | 01 Nov 22 16:25 PDT |
	|         | ssh docker info --format       |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-162516   | force-systemd-flag-162516 | jenkins | v1.27.1 | 01 Nov 22 16:25 PDT | 01 Nov 22 16:25 PDT |
	| start   | -p force-systemd-env-162615    | force-systemd-env-162615  | jenkins | v1.27.1 | 01 Nov 22 16:26 PDT | 01 Nov 22 16:26 PDT |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-162615       | force-systemd-env-162615  | jenkins | v1.27.1 | 01 Nov 22 16:26 PDT | 01 Nov 22 16:26 PDT |
	|         | ssh docker info --format       |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-162615    | force-systemd-env-162615  | jenkins | v1.27.1 | 01 Nov 22 16:26 PDT | 01 Nov 22 16:26 PDT |
	| start   | -p cert-expiration-162646      | cert-expiration-162646    | jenkins | v1.27.1 | 01 Nov 22 16:26 PDT | 01 Nov 22 16:27 PDT |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --cert-expiration=3m           |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-161955   | kubernetes-upgrade-161955 | jenkins | v1.27.1 | 01 Nov 22 16:28 PDT |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-161955   | kubernetes-upgrade-161955 | jenkins | v1.27.1 | 01 Nov 22 16:28 PDT | 01 Nov 22 16:29 PDT |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/01 16:28:46
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 16:28:46.141305   14501 out.go:296] Setting OutFile to fd 1 ...
	I1101 16:28:46.141478   14501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 16:28:46.141484   14501 out.go:309] Setting ErrFile to fd 2...
	I1101 16:28:46.141488   14501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 16:28:46.141612   14501 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	I1101 16:28:46.142962   14501 out.go:303] Setting JSON to false
	I1101 16:28:46.162203   14501 start.go:116] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3501,"bootTime":1667341825,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1101 16:28:46.162321   14501 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1101 16:28:46.184147   14501 out.go:177] * [kubernetes-upgrade-161955] minikube v1.27.1 on Darwin 13.0
	I1101 16:28:46.220824   14501 notify.go:220] Checking for updates...
	I1101 16:28:46.257832   14501 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 16:28:46.301586   14501 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 16:28:46.342725   14501 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1101 16:28:46.401043   14501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 16:28:46.462021   14501 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	I1101 16:28:46.483736   14501 config.go:180] Loaded profile config "kubernetes-upgrade-161955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 16:28:46.484404   14501 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 16:28:46.546785   14501 docker.go:137] docker version: linux-20.10.20
	I1101 16:28:46.546940   14501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 16:28:46.692276   14501 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:57 SystemTime:2022-11-01 23:28:46.61718358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/loc
al/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 16:28:46.750916   14501 out.go:177] * Using the docker driver based on existing profile
	I1101 16:28:46.772020   14501 start.go:282] selected driver: docker
	I1101 16:28:46.772050   14501 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-161955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-161955 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 16:28:46.772238   14501 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 16:28:46.775489   14501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 16:28:46.920530   14501 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:57 SystemTime:2022-11-01 23:28:46.844416621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 16:28:46.920687   14501 cni.go:95] Creating CNI manager for ""
	I1101 16:28:46.920700   14501 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 16:28:46.920716   14501 start_flags.go:317] config:
	{Name:kubernetes-upgrade-161955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-161955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmn
et/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 16:28:46.963205   14501 out.go:177] * Starting control plane node kubernetes-upgrade-161955 in cluster kubernetes-upgrade-161955
	I1101 16:28:46.984319   14501 cache.go:120] Beginning downloading kic base image for docker with docker
	I1101 16:28:47.006131   14501 out.go:177] * Pulling base image ...
	I1101 16:28:47.048381   14501 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1101 16:28:47.048388   14501 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 16:28:47.048486   14501 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1101 16:28:47.048502   14501 cache.go:57] Caching tarball of preloaded images
	I1101 16:28:47.048712   14501 preload.go:174] Found /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 16:28:47.048729   14501 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1101 16:28:47.049655   14501 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/config.json ...
	I1101 16:28:47.106975   14501 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1101 16:28:47.106996   14501 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1101 16:28:47.107009   14501 cache.go:208] Successfully downloaded all kic artifacts
	I1101 16:28:47.107058   14501 start.go:364] acquiring machines lock for kubernetes-upgrade-161955: {Name:mke1e9d0dded8f36a1fb5876974354764da97672 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 16:28:47.107162   14501 start.go:368] acquired machines lock for "kubernetes-upgrade-161955" in 82.472µs
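Start-up serializes machine operations behind a named lock with the Delay/Timeout shape shown above (500ms retry delay, 10m timeout). A rough illustration of that acquire-with-timeout pattern using a plain lock file; minikube itself uses a dedicated mutex package, so the helper below is purely hypothetical:

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"
    )

    // acquire takes a crude file-based lock by creating the lock file
    // exclusively, retrying every delay until timeout, i.e. the same
    // Delay/Timeout shape as the machines-lock spec in the log above.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("could not acquire %s within %s", path, timeout)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            log.Fatal(err)
        }
        defer release()
        fmt.Println("holding machines lock")
    }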
	I1101 16:28:47.107190   14501 start.go:96] Skipping create...Using existing machine configuration
	I1101 16:28:47.107201   14501 fix.go:55] fixHost starting: 
	I1101 16:28:47.107466   14501 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-161955 --format={{.State.Status}}
	I1101 16:28:47.165738   14501 fix.go:103] recreateIfNeeded on kubernetes-upgrade-161955: state=Running err=<nil>
	W1101 16:28:47.165766   14501 fix.go:129] unexpected machine state, will restart: <nil>
	I1101 16:28:47.187603   14501 out.go:177] * Updating the running docker "kubernetes-upgrade-161955" container ...
	I1101 16:28:47.262556   14501 machine.go:88] provisioning docker machine ...
	I1101 16:28:47.262611   14501 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-161955"
	I1101 16:28:47.262784   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:28:47.323252   14501 main.go:134] libmachine: Using SSH client type: native
	I1101 16:28:47.323460   14501 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52336 <nil> <nil>}
	I1101 16:28:47.323472   14501 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-161955 && echo "kubernetes-upgrade-161955" | sudo tee /etc/hostname
	I1101 16:28:47.445868   14501 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-161955
	
	I1101 16:28:47.445984   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:28:47.505466   14501 main.go:134] libmachine: Using SSH client type: native
	I1101 16:28:47.505664   14501 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52336 <nil> <nil>}
	I1101 16:28:47.505686   14501 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-161955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-161955/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-161955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 16:28:47.622745   14501 main.go:134] libmachine: SSH cmd err, output: <nil>: 
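The repeated cli_runner calls above resolve the host port that Docker published for the container's 22/tcp before each SSH session. A small stand-in for that lookup, shelling out to the same docker container inspect Go template (quoting simplified relative to the logged command, and the helper name is illustrative):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // sshHostPort returns the host port Docker published for the container's
    // 22/tcp, using the same Go template seen in the cli_runner lines above.
    func sshHostPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", fmt.Errorf("docker inspect %s: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        // Container name taken from the log above; any running container
        // with a published 22/tcp port would do.
        port, err := sshHostPort("kubernetes-upgrade-161955")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("ssh to 127.0.0.1:" + port)
    }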
	I1101 16:28:47.622772   14501 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15232-2108/.minikube CaCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15232-2108/.minikube}
	I1101 16:28:47.622792   14501 ubuntu.go:177] setting up certificates
	I1101 16:28:47.622807   14501 provision.go:83] configureAuth start
	I1101 16:28:47.622899   14501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-161955
	I1101 16:28:47.683615   14501 provision.go:138] copyHostCerts
	I1101 16:28:47.683723   14501 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem, removing ...
	I1101 16:28:47.683732   14501 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem
	I1101 16:28:47.683837   14501 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem (1082 bytes)
	I1101 16:28:47.684034   14501 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem, removing ...
	I1101 16:28:47.684040   14501 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem
	I1101 16:28:47.684104   14501 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem (1123 bytes)
	I1101 16:28:47.684244   14501 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem, removing ...
	I1101 16:28:47.684251   14501 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem
	I1101 16:28:47.684312   14501 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem (1675 bytes)
	I1101 16:28:47.684431   14501 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-161955 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-161955]
	I1101 16:28:47.740161   14501 provision.go:172] copyRemoteCerts
	I1101 16:28:47.740230   14501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 16:28:47.740305   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:28:47.841458   14501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52336 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/kubernetes-upgrade-161955/id_rsa Username:docker}
	I1101 16:28:47.927155   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 16:28:47.944267   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 16:28:47.962720   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 16:28:47.980445   14501 provision.go:86] duration metric: configureAuth took 357.627621ms
	I1101 16:28:47.980458   14501 ubuntu.go:193] setting minikube options for container-runtime
	I1101 16:28:47.980625   14501 config.go:180] Loaded profile config "kubernetes-upgrade-161955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 16:28:47.980718   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:28:48.041680   14501 main.go:134] libmachine: Using SSH client type: native
	I1101 16:28:48.041836   14501 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52336 <nil> <nil>}
	I1101 16:28:48.041845   14501 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 16:28:48.158568   14501 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1101 16:28:48.158579   14501 ubuntu.go:71] root file system type: overlay
	I1101 16:28:48.158753   14501 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 16:28:48.158855   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:28:48.217471   14501 main.go:134] libmachine: Using SSH client type: native
	I1101 16:28:48.217635   14501 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52336 <nil> <nil>}
	I1101 16:28:48.217692   14501 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 16:28:48.341722   14501 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 16:28:48.341891   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:28:48.402835   14501 main.go:134] libmachine: Using SSH client type: native
	I1101 16:28:48.402991   14501 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52336 <nil> <nil>}
	I1101 16:28:48.403003   14501 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 16:28:48.525094   14501 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 16:28:48.525110   14501 machine.go:91] provisioned docker machine in 1.262544351s
	I1101 16:28:48.525121   14501 start.go:300] post-start starting for "kubernetes-upgrade-161955" (driver="docker")
	I1101 16:28:48.525126   14501 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 16:28:48.525196   14501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 16:28:48.525256   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:28:48.585668   14501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52336 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/kubernetes-upgrade-161955/id_rsa Username:docker}
	I1101 16:28:48.672713   14501 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 16:28:48.676556   14501 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 16:28:48.676572   14501 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 16:28:48.676579   14501 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 16:28:48.676584   14501 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1101 16:28:48.676592   14501 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/addons for local assets ...
	I1101 16:28:48.676688   14501 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/files for local assets ...
	I1101 16:28:48.676876   14501 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem -> 34132.pem in /etc/ssl/certs
	I1101 16:28:48.677057   14501 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 16:28:48.684336   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /etc/ssl/certs/34132.pem (1708 bytes)
	I1101 16:28:48.703171   14501 start.go:303] post-start completed in 178.029894ms
	I1101 16:28:48.703286   14501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 16:28:48.703373   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:28:48.763468   14501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52336 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/kubernetes-upgrade-161955/id_rsa Username:docker}
	I1101 16:28:48.846241   14501 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 16:28:48.850672   14501 fix.go:57] fixHost completed within 1.743482958s
	I1101 16:28:48.850685   14501 start.go:83] releasing machines lock for "kubernetes-upgrade-161955", held for 1.743528893s
	I1101 16:28:48.850798   14501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-161955
	I1101 16:28:48.912705   14501 ssh_runner.go:195] Run: systemctl --version
	I1101 16:28:48.912706   14501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 16:28:48.912795   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:28:48.912797   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:28:48.982690   14501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52336 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/kubernetes-upgrade-161955/id_rsa Username:docker}
	I1101 16:28:48.982882   14501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52336 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/kubernetes-upgrade-161955/id_rsa Username:docker}
	I1101 16:28:49.065820   14501 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 16:28:49.143081   14501 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1101 16:28:49.143164   14501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 16:28:49.153668   14501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 16:28:49.166800   14501 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 16:28:49.248344   14501 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 16:28:49.332588   14501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 16:28:49.421421   14501 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 16:28:51.397261   14501 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.975822553s)
	I1101 16:28:51.397373   14501 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 16:28:51.471174   14501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 16:28:51.552008   14501 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1101 16:28:51.562185   14501 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1101 16:28:51.562271   14501 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1101 16:28:51.566210   14501 start.go:472] Will wait 60s for crictl version
	I1101 16:28:51.566264   14501 ssh_runner.go:195] Run: sudo crictl version
	I1101 16:28:51.598152   14501 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
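Before querying crictl, the start path waits up to 60s for /var/run/cri-dockerd.sock to appear (the stat call above). A sketch of that poll-until-present pattern, assuming direct filesystem access rather than the ssh_runner used in the log:

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"
    )

    // waitForSocket polls until path exists (as the "Will wait 60s for socket
    // path" step does via stat) or the timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            log.Fatal(err)
        }
        fmt.Println("cri-dockerd socket is ready")
    }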
	I1101 16:28:51.598256   14501 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 16:28:51.629154   14501 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 16:28:51.701976   14501 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1101 16:28:51.702144   14501 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-161955 dig +short host.docker.internal
	I1101 16:28:51.823788   14501 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1101 16:28:51.823909   14501 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1101 16:28:51.829337   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:28:51.903466   14501 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1101 16:28:51.903557   14501 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 16:28:51.968789   14501 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1101 16:28:51.968806   14501 docker.go:543] Images already preloaded, skipping extraction
	I1101 16:28:51.968917   14501 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 16:28:52.050042   14501 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1101 16:28:52.050065   14501 cache_images.go:84] Images are preloaded, skipping loading
	I1101 16:28:52.050219   14501 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1101 16:28:52.178346   14501 cni.go:95] Creating CNI manager for ""
	I1101 16:28:52.178365   14501 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 16:28:52.178385   14501 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 16:28:52.178419   14501 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-161955 NodeName:kubernetes-upgrade-161955 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1101 16:28:52.178579   14501 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-161955"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 16:28:52.178681   14501 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-161955 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-161955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
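The kubeadm.yaml and kubelet flags above are rendered from the options struct logged at kubeadm.go:156 by filling in templates. A minimal sketch of rendering just the InitConfiguration fragment with text/template; the kubeadmOpts field names below are illustrative, not minikube's:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // A small fragment of a kubeadm InitConfiguration template; the real
    // template covers every section shown in the kubeadm config above.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    type kubeadmOpts struct {
        AdvertiseAddress string
        APIServerPort    int
        CRISocket        string
        NodeName         string
        NodeIP           string
    }

    func main() {
        opts := kubeadmOpts{
            AdvertiseAddress: "192.168.67.2",
            APIServerPort:    8443,
            CRISocket:        "/var/run/cri-dockerd.sock",
            NodeName:         "kubernetes-upgrade-161955",
            NodeIP:           "192.168.67.2",
        }
        t := template.Must(template.New("kubeadm").Parse(initCfg))
        // The rendered YAML is what lands in /var/tmp/minikube/kubeadm.yaml.new.
        if err := t.Execute(os.Stdout, opts); err != nil {
            log.Fatal(err)
        }
    }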
	I1101 16:28:52.178760   14501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1101 16:28:52.190311   14501 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 16:28:52.190401   14501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 16:28:52.226769   14501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (487 bytes)
	I1101 16:28:52.243543   14501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 16:28:52.261958   14501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2047 bytes)
	I1101 16:28:52.280042   14501 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1101 16:28:52.327700   14501 certs.go:54] Setting up /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955 for IP: 192.168.67.2
	I1101 16:28:52.327839   14501 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key
	I1101 16:28:52.327916   14501 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key
	I1101 16:28:52.328033   14501 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.key
	I1101 16:28:52.328130   14501 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.key.c7fa3a9e
	I1101 16:28:52.328198   14501 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/proxy-client.key
	I1101 16:28:52.328502   14501 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem (1338 bytes)
	W1101 16:28:52.328545   14501 certs.go:384] ignoring /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413_empty.pem, impossibly tiny 0 bytes
	I1101 16:28:52.328558   14501 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 16:28:52.328603   14501 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem (1082 bytes)
	I1101 16:28:52.328643   14501 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem (1123 bytes)
	I1101 16:28:52.328675   14501 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem (1675 bytes)
	I1101 16:28:52.328781   14501 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem (1708 bytes)
	I1101 16:28:52.329525   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 16:28:52.357107   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 16:28:52.382780   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 16:28:52.434278   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 16:28:52.453437   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 16:28:52.475729   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 16:28:52.531081   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 16:28:52.549827   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 16:28:52.569053   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /usr/share/ca-certificates/34132.pem (1708 bytes)
	I1101 16:28:52.624394   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 16:28:52.643832   14501 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem --> /usr/share/ca-certificates/3413.pem (1338 bytes)
	I1101 16:28:52.665241   14501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 16:28:52.678962   14501 ssh_runner.go:195] Run: openssl version
	I1101 16:28:52.684511   14501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34132.pem && ln -fs /usr/share/ca-certificates/34132.pem /etc/ssl/certs/34132.pem"
	I1101 16:28:52.725456   14501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34132.pem
	I1101 16:28:52.730087   14501 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  1 22:49 /usr/share/ca-certificates/34132.pem
	I1101 16:28:52.730145   14501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34132.pem
	I1101 16:28:52.736262   14501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34132.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 16:28:52.744098   14501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 16:28:52.755941   14501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:28:52.770578   14501 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  1 22:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:28:52.770653   14501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:28:52.783149   14501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 16:28:52.827225   14501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3413.pem && ln -fs /usr/share/ca-certificates/3413.pem /etc/ssl/certs/3413.pem"
	I1101 16:28:52.837321   14501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3413.pem
	I1101 16:28:52.842075   14501 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  1 22:49 /usr/share/ca-certificates/3413.pem
	I1101 16:28:52.842138   14501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3413.pem
	I1101 16:28:52.849594   14501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3413.pem /etc/ssl/certs/51391683.0"
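The openssl/ln steps above publish each CA into /etc/ssl/certs under its OpenSSL subject-hash name (for example minikubeCA.pem becomes b5213941.0) so TLS clients on the node can find it. A compact sketch of the same hash-and-symlink sequence; it requires root and an openssl binary on PATH:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // linkCertByHash mirrors the logged commands: compute the certificate's
    // OpenSSL subject hash, then symlink <hash>.0 in the certs dir to the PEM.
    func linkCertByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("%s/%s.0", certsDir, hash)
        // ln -fs semantics: replace an existing link if present.
        _ = os.Remove(link)
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            log.Fatal(err)
        }
    }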
	I1101 16:28:52.859480   14501 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-161955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-161955 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 16:28:52.859636   14501 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 16:28:52.885403   14501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 16:28:52.929657   14501 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1101 16:28:52.929676   14501 kubeadm.go:627] restartCluster start
	I1101 16:28:52.929786   14501 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 16:28:52.938186   14501 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:28:52.938275   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:28:53.003940   14501 kubeconfig.go:92] found "kubernetes-upgrade-161955" server: "https://127.0.0.1:52340"
	I1101 16:28:53.004794   14501 kapi.go:59] client config for kubernetes-upgrade-161955: &rest.Config{Host:"https://127.0.0.1:52340", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.key", CAFile:"/Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2345860), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
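The rest.Config dump above is the client configuration minikube builds from the kubeconfig: the API server at https://127.0.0.1:52340 plus the profile's client certificate, key, and cluster CA. A minimal client-go sketch that builds the equivalent config and lists kube-system pods (as the system_pods check further down does):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Host and TLS file paths mirror the rest.Config dump in the log above.
        cfg := &rest.Config{
            Host: "https://127.0.0.1:52340",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt",
            },
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    }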
	I1101 16:28:53.005404   14501 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 16:28:53.014568   14501 api_server.go:165] Checking apiserver status ...
	I1101 16:28:53.014647   14501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:28:53.029149   14501 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/12691/cgroup
	W1101 16:28:53.037529   14501 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/12691/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:28:53.037606   14501 ssh_runner.go:195] Run: ls
	I1101 16:28:53.042109   14501 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52340/healthz ...
	I1101 16:28:58.043484   14501 api_server.go:268] stopped: https://127.0.0.1:52340/healthz: Get "https://127.0.0.1:52340/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 16:28:58.043532   14501 retry.go:31] will retry after 263.082536ms: state is "Stopped"
	I1101 16:28:58.306770   14501 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52340/healthz ...
	I1101 16:29:03.307502   14501 api_server.go:268] stopped: https://127.0.0.1:52340/healthz: Get "https://127.0.0.1:52340/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 16:29:03.307554   14501 retry.go:31] will retry after 381.329545ms: state is "Stopped"
	I1101 16:29:03.689012   14501 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52340/healthz ...
	I1101 16:29:04.735990   14501 api_server.go:278] https://127.0.0.1:52340/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 16:29:04.736021   14501 retry.go:31] will retry after 422.765636ms: https://127.0.0.1:52340/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 16:29:05.158965   14501 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52340/healthz ...
	I1101 16:29:05.166399   14501 api_server.go:278] https://127.0.0.1:52340/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 16:29:05.166425   14501 retry.go:31] will retry after 473.074753ms: https://127.0.0.1:52340/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 16:29:05.639619   14501 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52340/healthz ...
	I1101 16:29:05.646467   14501 api_server.go:278] https://127.0.0.1:52340/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 16:29:05.646485   14501 retry.go:31] will retry after 587.352751ms: https://127.0.0.1:52340/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 16:29:06.234454   14501 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52340/healthz ...
	I1101 16:29:06.240101   14501 api_server.go:278] https://127.0.0.1:52340/healthz returned 200:
	ok
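The healthz probes above keep re-querying https://127.0.0.1:52340/healthz, backing off briefly after each 500, until the endpoint finally answers 200 "ok". A simplified sketch of that wait loop; unlike the real check, it skips TLS verification purely to keep the example short:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200; non-200 responses (like the 500s above) are logged and retried.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Verification skipped for brevity; the real check trusts the
            // cluster CA and presents a client certificate instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                log.Printf("healthz returned %d, retrying:\n%s", resp.StatusCode, body)
            } else {
                log.Printf("healthz not reachable yet: %v", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://127.0.0.1:52340/healthz", 60*time.Second); err != nil {
            log.Fatal(err)
        }
        fmt.Println("apiserver is healthy")
    }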
	I1101 16:29:06.252333   14501 system_pods.go:86] 5 kube-system pods found
	I1101 16:29:06.252349   14501 system_pods.go:89] "etcd-kubernetes-upgrade-161955" [7df142da-512b-4490-9e18-428475d935a9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 16:29:06.252355   14501 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-161955" [78ee595a-b751-44ac-bcfe-3e5d6c1e7699] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 16:29:06.252364   14501 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-161955" [f57ad3d8-6e2a-40a9-ace9-f81768c74e76] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 16:29:06.252371   14501 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-161955" [d1042725-c1fc-4bbc-b65c-4cdf26bbc632] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 16:29:06.252378   14501 system_pods.go:89] "storage-provisioner" [bed5ce4b-eb89-49c2-9b0e-7ccc41e5edd3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 16:29:06.252384   14501 kubeadm.go:611] needs reconfigure: missing components: kube-dns, kube-proxy
	I1101 16:29:06.252389   14501 kubeadm.go:1114] stopping kube-system containers ...
	I1101 16:29:06.252467   14501 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 16:29:06.277134   14501 docker.go:444] Stopping containers: [2f637ba60e82 ab896f0b389b 011298a92bb2 6d856ae1bb2e abaad9f545b4 e9ee41cc9519 1bc03548478a fb00e53c9b63 6e6e1100bb42 ef9d63a17b8c 6f336eaf5f1b d61beae38c3d 6156d25fe21f d3b7c7d94882 3815952d4100 376eabb9ba5e bcaac49a8263 e002e72cb57c 0f8bcef61a4c]
	I1101 16:29:06.277232   14501 ssh_runner.go:195] Run: docker stop 2f637ba60e82 ab896f0b389b 011298a92bb2 6d856ae1bb2e abaad9f545b4 e9ee41cc9519 1bc03548478a fb00e53c9b63 6e6e1100bb42 ef9d63a17b8c 6f336eaf5f1b d61beae38c3d 6156d25fe21f d3b7c7d94882 3815952d4100 376eabb9ba5e bcaac49a8263 e002e72cb57c 0f8bcef61a4c
	I1101 16:29:07.136616   14501 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 16:29:07.213918   14501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 16:29:07.222677   14501 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Nov  1 23:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov  1 23:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Nov  1 23:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov  1 23:28 /etc/kubernetes/scheduler.conf
	
	I1101 16:29:07.222771   14501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 16:29:07.231187   14501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 16:29:07.239115   14501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 16:29:07.246855   14501 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:29:07.246925   14501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 16:29:07.254739   14501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 16:29:07.264108   14501 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:29:07.264193   14501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 16:29:07.272005   14501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 16:29:07.280875   14501 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 16:29:07.280899   14501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:29:07.332400   14501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:29:07.937785   14501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:29:08.078778   14501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:29:08.132270   14501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:29:08.233534   14501 api_server.go:51] waiting for apiserver process to appear ...
	I1101 16:29:08.233602   14501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:29:08.745356   14501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:29:09.246062   14501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:29:09.257652   14501 api_server.go:71] duration metric: took 1.024126922s to wait for apiserver process to appear ...
	I1101 16:29:09.257665   14501 api_server.go:87] waiting for apiserver healthz status ...
	I1101 16:29:09.257677   14501 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52340/healthz ...
	I1101 16:29:12.655701   14501 api_server.go:278] https://127.0.0.1:52340/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 16:29:12.655722   14501 api_server.go:102] status: https://127.0.0.1:52340/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 16:29:13.156511   14501 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52340/healthz ...
	I1101 16:29:13.163793   14501 api_server.go:278] https://127.0.0.1:52340/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 16:29:13.163806   14501 api_server.go:102] status: https://127.0.0.1:52340/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 16:29:13.657033   14501 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52340/healthz ...
	I1101 16:29:13.663579   14501 api_server.go:278] https://127.0.0.1:52340/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 16:29:13.663600   14501 api_server.go:102] status: https://127.0.0.1:52340/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 16:29:14.157533   14501 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52340/healthz ...
	I1101 16:29:14.165494   14501 api_server.go:278] https://127.0.0.1:52340/healthz returned 200:
	ok
	I1101 16:29:14.171925   14501 api_server.go:140] control plane version: v1.25.3
	I1101 16:29:14.171936   14501 api_server.go:130] duration metric: took 4.914303271s to wait for apiserver health ...
	I1101 16:29:14.171945   14501 cni.go:95] Creating CNI manager for ""
	I1101 16:29:14.171950   14501 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 16:29:14.171956   14501 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 16:29:14.176461   14501 system_pods.go:59] 5 kube-system pods found
	I1101 16:29:14.176474   14501 system_pods.go:61] "etcd-kubernetes-upgrade-161955" [7df142da-512b-4490-9e18-428475d935a9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 16:29:14.176484   14501 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-161955" [78ee595a-b751-44ac-bcfe-3e5d6c1e7699] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 16:29:14.176496   14501 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-161955" [f57ad3d8-6e2a-40a9-ace9-f81768c74e76] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 16:29:14.176504   14501 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-161955" [d1042725-c1fc-4bbc-b65c-4cdf26bbc632] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 16:29:14.176508   14501 system_pods.go:61] "storage-provisioner" [bed5ce4b-eb89-49c2-9b0e-7ccc41e5edd3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 16:29:14.176513   14501 system_pods.go:74] duration metric: took 4.552937ms to wait for pod list to return data ...
	I1101 16:29:14.176517   14501 node_conditions.go:102] verifying NodePressure condition ...
	I1101 16:29:14.179352   14501 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I1101 16:29:14.179367   14501 node_conditions.go:123] node cpu capacity is 6
	I1101 16:29:14.179377   14501 node_conditions.go:105] duration metric: took 2.856878ms to run NodePressure ...
	I1101 16:29:14.179387   14501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:29:14.292442   14501 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 16:29:14.299607   14501 ops.go:34] apiserver oom_adj: -16
	I1101 16:29:14.299620   14501 kubeadm.go:631] restartCluster took 21.370092759s
	I1101 16:29:14.299628   14501 kubeadm.go:398] StartCluster complete in 21.440332531s
	I1101 16:29:14.299641   14501 settings.go:142] acquiring lock: {Name:mkdb6df16d9cd02d82e4a95348c412b3d2076fed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:29:14.299729   14501 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 16:29:14.300439   14501 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/kubeconfig: {Name:mka869f80d5e962d9ffa24675c3f5e3e0593fcfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:29:14.301910   14501 kapi.go:59] client config for kubernetes-upgrade-161955: &rest.Config{Host:"https://127.0.0.1:52340", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.key", CAFile:"/Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2345860), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 16:29:14.304998   14501 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-161955" rescaled to 1
	I1101 16:29:14.305035   14501 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1101 16:29:14.305055   14501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 16:29:14.305081   14501 addons.go:412] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I1101 16:29:14.305214   14501 config.go:180] Loaded profile config "kubernetes-upgrade-161955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 16:29:14.349046   14501 out.go:177] * Verifying Kubernetes components...
	I1101 16:29:14.349183   14501 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-161955"
	I1101 16:29:14.349182   14501 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-161955"
	I1101 16:29:14.370179   14501 addons.go:153] Setting addon storage-provisioner=true in "kubernetes-upgrade-161955"
	W1101 16:29:14.370189   14501 addons.go:162] addon storage-provisioner should already be in state true
	I1101 16:29:14.370201   14501 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-161955"
	I1101 16:29:14.370206   14501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 16:29:14.370239   14501 host.go:66] Checking if "kubernetes-upgrade-161955" exists ...
	I1101 16:29:14.370633   14501 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-161955 --format={{.State.Status}}
	I1101 16:29:14.370648   14501 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-161955 --format={{.State.Status}}
	I1101 16:29:14.374464   14501 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1101 16:29:14.384621   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:29:14.445573   14501 kapi.go:59] client config for kubernetes-upgrade-161955: &rest.Config{Host:"https://127.0.0.1:52340", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubernetes-upgrade-161955/client.key", CAFile:"/Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2345860), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 16:29:14.466501   14501 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 16:29:14.473143   14501 addons.go:153] Setting addon default-storageclass=true in "kubernetes-upgrade-161955"
	W1101 16:29:14.487065   14501 addons.go:162] addon default-storageclass should already be in state true
	I1101 16:29:14.487120   14501 host.go:66] Checking if "kubernetes-upgrade-161955" exists ...
	I1101 16:29:14.487128   14501 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 16:29:14.487139   14501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 16:29:14.487235   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:29:14.488588   14501 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-161955 --format={{.State.Status}}
	I1101 16:29:14.495642   14501 api_server.go:51] waiting for apiserver process to appear ...
	I1101 16:29:14.495747   14501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:29:14.508264   14501 api_server.go:71] duration metric: took 203.204812ms to wait for apiserver process to appear ...
	I1101 16:29:14.508285   14501 api_server.go:87] waiting for apiserver healthz status ...
	I1101 16:29:14.508302   14501 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52340/healthz ...
	I1101 16:29:14.514828   14501 api_server.go:278] https://127.0.0.1:52340/healthz returned 200:
	ok
	I1101 16:29:14.516749   14501 api_server.go:140] control plane version: v1.25.3
	I1101 16:29:14.516758   14501 api_server.go:130] duration metric: took 8.468032ms to wait for apiserver health ...
	I1101 16:29:14.516763   14501 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 16:29:14.521446   14501 system_pods.go:59] 5 kube-system pods found
	I1101 16:29:14.521463   14501 system_pods.go:61] "etcd-kubernetes-upgrade-161955" [7df142da-512b-4490-9e18-428475d935a9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 16:29:14.521475   14501 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-161955" [78ee595a-b751-44ac-bcfe-3e5d6c1e7699] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 16:29:14.521494   14501 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-161955" [f57ad3d8-6e2a-40a9-ace9-f81768c74e76] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 16:29:14.521501   14501 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-161955" [d1042725-c1fc-4bbc-b65c-4cdf26bbc632] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 16:29:14.521506   14501 system_pods.go:61] "storage-provisioner" [bed5ce4b-eb89-49c2-9b0e-7ccc41e5edd3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 16:29:14.521510   14501 system_pods.go:74] duration metric: took 4.743332ms to wait for pod list to return data ...
	I1101 16:29:14.521517   14501 kubeadm.go:573] duration metric: took 216.461855ms to wait for : map[apiserver:true system_pods:true] ...
	I1101 16:29:14.521526   14501 node_conditions.go:102] verifying NodePressure condition ...
	I1101 16:29:14.525903   14501 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I1101 16:29:14.525916   14501 node_conditions.go:123] node cpu capacity is 6
	I1101 16:29:14.525926   14501 node_conditions.go:105] duration metric: took 4.396115ms to run NodePressure ...
	I1101 16:29:14.525933   14501 start.go:217] waiting for startup goroutines ...
	I1101 16:29:14.556697   14501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52336 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/kubernetes-upgrade-161955/id_rsa Username:docker}
	I1101 16:29:14.556743   14501 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 16:29:14.556752   14501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 16:29:14.556827   14501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-161955
	I1101 16:29:14.618391   14501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52336 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/kubernetes-upgrade-161955/id_rsa Username:docker}
	I1101 16:29:14.656902   14501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 16:29:14.722117   14501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 16:29:15.369624   14501 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1101 16:29:15.443420   14501 addons.go:414] enableAddons completed in 1.138358032s
	I1101 16:29:15.443859   14501 ssh_runner.go:195] Run: rm -f paused
	I1101 16:29:15.485111   14501 start.go:506] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I1101 16:29:15.506335   14501 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-161955" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-11-01 23:24:09 UTC, end at Tue 2022-11-01 23:29:16 UTC. --
	Nov 01 23:28:50 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:50.967974105Z" level=info msg="ignoring event" container=ef9d63a17b8cc24f7ba6da7d4de92bb6c1837841e59132cb6d580e97d909e680 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 23:28:50 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:50.968246186Z" level=info msg="ignoring event" container=6156d25fe21feffdcbf9e8a1644b9f98c01788a9ca0f02b390372147029e3121 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 23:28:50 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:50.971110343Z" level=info msg="ignoring event" container=d61beae38c3d13473cd3bbca6228bdb0e6d8a6933abe0aa74e78197c5b271557 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 23:28:50 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:50.973982496Z" level=info msg="ignoring event" container=6f336eaf5f1b1c0e4e5ab9838aec1512377a49f1b83ccdba0b3d94cadb31e4f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 23:28:51 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:51.147576350Z" level=info msg="Removing stale sandbox 29587a20c8e79893d4c7a3993ec55522dd991b061066a8ab306501028b33586e (6156d25fe21feffdcbf9e8a1644b9f98c01788a9ca0f02b390372147029e3121)"
	Nov 01 23:28:51 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:51.148974007Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b964bf2ab5eb4f2db7da9aa15ac3117d2faa2dc5eeed65901bc29fe9c9d65337 c66842ba1f7d1fa63974fa593f714f2170b59461bc759b77cfff0ff7289e0087], retrying...."
	Nov 01 23:28:51 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:51.234782720Z" level=info msg="Removing stale sandbox 99730c98fd0e3a27eb2c40b75557c0df2588956b8df28b2943bc30087a1ce434 (6e6e1100bb422bc30e8e2987d94d0ca5d9de1a16d26c1e0d0739070d2c557088)"
	Nov 01 23:28:51 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:51.236414433Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b964bf2ab5eb4f2db7da9aa15ac3117d2faa2dc5eeed65901bc29fe9c9d65337 7b6b8f3dca608c3430c8ada53bf485b5141b19535941e1d62c69513997338821], retrying...."
	Nov 01 23:28:51 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:51.315550269Z" level=info msg="Removing stale sandbox 9c81586baf7fae41e0e1851eed55b3fe8c35c0a8b3d9e29ae32af6d558fe25f0 (d61beae38c3d13473cd3bbca6228bdb0e6d8a6933abe0aa74e78197c5b271557)"
	Nov 01 23:28:51 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:51.316939893Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b964bf2ab5eb4f2db7da9aa15ac3117d2faa2dc5eeed65901bc29fe9c9d65337 2fcddcaa2b28f703132a15bb241874e6479e38a45a1cd30ac0ab04e5e5fc56c7], retrying...."
	Nov 01 23:28:51 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:51.339892023Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 01 23:28:51 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:51.384965260Z" level=info msg="Loading containers: done."
	Nov 01 23:28:51 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:51.398492967Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 01 23:28:51 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:51.398560160Z" level=info msg="Daemon has completed initialization"
	Nov 01 23:28:51 kubernetes-upgrade-161955 systemd[1]: Started Docker Application Container Engine.
	Nov 01 23:28:51 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:51.421540030Z" level=info msg="API listen on [::]:2376"
	Nov 01 23:28:51 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:28:51.428610693Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 01 23:29:06 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:29:06.363783525Z" level=info msg="ignoring event" container=e9ee41cc9519ef9b24ca1f83df4336db59a959581802b5a3af16d9a62da429e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 23:29:06 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:29:06.426512505Z" level=info msg="ignoring event" container=1bc03548478aca62a26d46e382ce22f47feda87ed29b4e38b00f154c59f8e44f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 23:29:06 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:29:06.428770197Z" level=info msg="ignoring event" container=6d856ae1bb2e1d8d9b3b64f8ea8312f5225c71a1a3934b297f84f02aa32c84da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 23:29:06 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:29:06.434855002Z" level=info msg="ignoring event" container=ab896f0b389b9256c9daa3fa72e276a19d68bb92b9a21793a2a307052a6d24b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 23:29:06 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:29:06.436041864Z" level=info msg="ignoring event" container=2f637ba60e821cac37878fd46ee84ce6ebbdc2d3dc97b75c840036439a08b781 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 23:29:06 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:29:06.436082386Z" level=info msg="ignoring event" container=fb00e53c9b63c72f03368dc772620fca0feeea552e9a25ce5968ae8df93cb2f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 23:29:06 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:29:06.442514684Z" level=info msg="ignoring event" container=abaad9f545b481be54d165112cded232c97bea8bdec980cd3b2e13c57fc3eb10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 01 23:29:07 kubernetes-upgrade-161955 dockerd[12003]: time="2022-11-01T23:29:07.072428016Z" level=info msg="ignoring event" container=011298a92bb2d1d1388491d2fc9bf77b6cae9bb8731c798a4a60462fcbf90a4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	e4647665a971c       a8a176a5d5d69       8 seconds ago       Running             etcd                      3                   7ced59104da9e
	e57ac10afd6c8       0346dbd74bcb9       9 seconds ago       Running             kube-apiserver            2                   196d882c9baaa
	123bdef3b1205       6d23ec0e8b87e       9 seconds ago       Running             kube-scheduler            3                   9efe4bdd0e432
	adfcb5729ed6b       6039992312758       9 seconds ago       Running             kube-controller-manager   2                   b1f49da4894f6
	2f637ba60e821       6d23ec0e8b87e       16 seconds ago      Exited              kube-scheduler            2                   1bc03548478ac
	ab896f0b389b9       a8a176a5d5d69       16 seconds ago      Exited              etcd                      2                   fb00e53c9b63c
	011298a92bb2d       0346dbd74bcb9       25 seconds ago      Exited              kube-apiserver            1                   abaad9f545b48
	6d856ae1bb2e1       6039992312758       25 seconds ago      Exited              kube-controller-manager   1                   e9ee41cc9519e
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-161955
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-161955
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=65bfd3dc2bf9824cf305579b01895f56b2ba9210
	                    minikube.k8s.io/name=kubernetes-upgrade-161955
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_11_01T16_28_43_0700
	                    minikube.k8s.io/version=v1.27.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Nov 2022 23:28:40 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-161955
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Nov 2022 23:29:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Nov 2022 23:29:12 +0000   Tue, 01 Nov 2022 23:28:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Nov 2022 23:29:12 +0000   Tue, 01 Nov 2022 23:28:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Nov 2022 23:29:12 +0000   Tue, 01 Nov 2022 23:28:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Nov 2022 23:29:12 +0000   Tue, 01 Nov 2022 23:28:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-161955
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6085668Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6085668Ki
	  pods:               110
	System Info:
	  Machine ID:                 996614ec4c814b87b7ec8ebee3d0e8c9
	  System UUID:                a4d961b3-ddd5-40cb-b2cb-bc8f57179e22
	  Boot ID:                    8739ca78-7f51-402c-a5f3-69d5b4815b8f
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.20
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-161955                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kube-apiserver-kubernetes-upgrade-161955             250m (4%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-161955    200m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-kubernetes-upgrade-161955             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  40s (x4 over 41s)  kubelet  Node kubernetes-upgrade-161955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x4 over 41s)  kubelet  Node kubernetes-upgrade-161955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x4 over 41s)  kubelet  Node kubernetes-upgrade-161955 status is now: NodeHasSufficientPID
	  Normal  Starting                 34s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  34s                kubelet  Node kubernetes-upgrade-161955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s                kubelet  Node kubernetes-upgrade-161955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s                kubelet  Node kubernetes-upgrade-161955 status is now: NodeHasSufficientPID
	  Normal  NodeReady                34s                kubelet  Node kubernetes-upgrade-161955 status is now: NodeReady
	  Normal  Starting                 9s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x7 over 9s)    kubelet  Node kubernetes-upgrade-161955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x7 over 9s)    kubelet  Node kubernetes-upgrade-161955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x6 over 9s)    kubelet  Node kubernetes-upgrade-161955 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.001739] FS-Cache: O-key=[8] '4c24890400000000'
	[  +0.001169] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.001593] FS-Cache: N-cookie d=00000000958d8b47{9p.inode} n=0000000014bb51dd
	[  +0.001696] FS-Cache: N-key=[8] '4c24890400000000'
	[  +0.002175] FS-Cache: Duplicate cookie detected
	[  +0.001064] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.001587] FS-Cache: O-cookie d=00000000958d8b47{9p.inode} n=000000004c921b4a
	[  +0.001716] FS-Cache: O-key=[8] '4c24890400000000'
	[  +0.001174] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.001573] FS-Cache: N-cookie d=00000000958d8b47{9p.inode} n=00000000eeddb71f
	[  +0.001679] FS-Cache: N-key=[8] '4c24890400000000'
	[  +3.608383] FS-Cache: Duplicate cookie detected
	[  +0.001077] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.001569] FS-Cache: O-cookie d=00000000958d8b47{9p.inode} n=0000000011e1bcd4
	[  +0.001704] FS-Cache: O-key=[8] '4b24890400000000'
	[  +0.001146] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.001542] FS-Cache: N-cookie d=00000000958d8b47{9p.inode} n=0000000055099572
	[  +0.001688] FS-Cache: N-key=[8] '4b24890400000000'
	[  +0.405242] FS-Cache: Duplicate cookie detected
	[  +0.001072] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.001559] FS-Cache: O-cookie d=00000000958d8b47{9p.inode} n=000000000ab52bbf
	[  +0.001716] FS-Cache: O-key=[8] '5424890400000000'
	[  +0.001164] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.001556] FS-Cache: N-cookie d=00000000958d8b47{9p.inode} n=0000000078603da9
	[  +0.001691] FS-Cache: N-key=[8] '5424890400000000'
	
	* 
	* ==> etcd [ab896f0b389b] <==
	* {"level":"info","ts":"2022-11-01T23:29:02.031Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-01T23:29:02.031Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-11-01T23:29:02.031Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-11-01T23:29:03.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2022-11-01T23:29:03.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-11-01T23:29:03.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-11-01T23:29:03.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2022-11-01T23:29:03.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-11-01T23:29:03.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2022-11-01T23:29:03.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-11-01T23:29:03.129Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-161955 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-11-01T23:29:03.129Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-01T23:29:03.129Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-01T23:29:03.130Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-11-01T23:29:03.130Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-11-01T23:29:03.131Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-11-01T23:29:03.131Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-11-01T23:29:06.332Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-11-01T23:29:06.332Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"kubernetes-upgrade-161955","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/11/01 23:29:06 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/11/01 23:29:06 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-11-01T23:29:06.341Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-11-01T23:29:06.343Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-01T23:29:06.345Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-01T23:29:06.345Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"kubernetes-upgrade-161955","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> etcd [e4647665a971] <==
	* {"level":"info","ts":"2022-11-01T23:29:09.678Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-11-01T23:29:09.679Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-11-01T23:29:09.679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-11-01T23:29:09.679Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-11-01T23:29:09.679Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-11-01T23:29:09.679Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-11-01T23:29:09.681Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-11-01T23:29:09.681Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-01T23:29:09.681Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-01T23:29:09.681Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-11-01T23:29:09.681Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-11-01T23:29:11.074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2022-11-01T23:29:11.074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-11-01T23:29:11.074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-11-01T23:29:11.074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2022-11-01T23:29:11.074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-11-01T23:29:11.074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2022-11-01T23:29:11.074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-11-01T23:29:11.076Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-161955 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-11-01T23:29:11.076Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-01T23:29:11.076Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-01T23:29:11.077Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-11-01T23:29:11.077Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-11-01T23:29:11.077Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-11-01T23:29:11.077Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  23:29:17 up 58 min,  0 users,  load average: 2.12, 1.39, 1.11
	Linux kubernetes-upgrade-161955 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [011298a92bb2] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 23:29:06.337095       1 logging.go:59] [core] [Channel #44 SubChannel #45] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 23:29:06.337134       1 logging.go:59] [core] [Channel #128 SubChannel #129] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 23:29:06.342939       1 logging.go:59] [core] [Channel #47 SubChannel #48] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [e57ac10afd6c] <==
	* I1101 23:29:12.669686       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1101 23:29:12.669718       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1101 23:29:12.669726       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1101 23:29:12.671888       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I1101 23:29:12.671919       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1101 23:29:12.671963       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1101 23:29:12.671968       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1101 23:29:12.671973       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E1101 23:29:12.674770       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1101 23:29:12.677208       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1101 23:29:12.678009       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 23:29:12.734621       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1101 23:29:12.753850       1 cache.go:39] Caches are synced for autoregister controller
	I1101 23:29:12.754045       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I1101 23:29:12.754111       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 23:29:12.754995       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1101 23:29:12.772354       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 23:29:12.781740       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 23:29:13.479040       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1101 23:29:13.671243       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 23:29:14.261756       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1101 23:29:14.268382       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I1101 23:29:14.285699       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I1101 23:29:14.297665       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 23:29:14.301469       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [6d856ae1bb2e] <==
	* I1101 23:29:06.238156       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for deployments.apps
	W1101 23:29:06.238169       1 shared_informer.go:533] resyncPeriod 15h53m0.876641825s is smaller than resyncCheckPeriod 18h48m24.145142563s and the informer has already started. Changing it to 18h48m24.145142563s
	I1101 23:29:06.238211       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for controllerrevisions.apps
	I1101 23:29:06.238253       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for statefulsets.apps
	I1101 23:29:06.238264       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
	I1101 23:29:06.238313       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
	I1101 23:29:06.238323       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I1101 23:29:06.238348       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for limitranges
	I1101 23:29:06.238380       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpoints
	I1101 23:29:06.238393       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for cronjobs.batch
	I1101 23:29:06.238451       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
	I1101 23:29:06.238496       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for serviceaccounts
	I1101 23:29:06.238552       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for daemonsets.apps
	I1101 23:29:06.238570       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for jobs.batch
	I1101 23:29:06.238581       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
	I1101 23:29:06.238641       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I1101 23:29:06.238656       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for podtemplates
	I1101 23:29:06.238666       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
	I1101 23:29:06.238750       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	I1101 23:29:06.238784       1 controllermanager.go:603] Started "resourcequota"
	I1101 23:29:06.238872       1 resource_quota_controller.go:277] Starting resource quota controller
	I1101 23:29:06.238891       1 shared_informer.go:255] Waiting for caches to sync for resource quota
	I1101 23:29:06.238905       1 resource_quota_monitor.go:295] QuotaMonitor running
	I1101 23:29:06.240567       1 node_ipam_controller.go:91] Sending events to api server.
	I1101 23:29:06.313937       1 shared_informer.go:262] Caches are synced for tokens
	
	* 
	* ==> kube-controller-manager [adfcb5729ed6] <==
	* I1101 23:29:14.771079       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1101 23:29:14.771045       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-client"
	I1101 23:29:14.771363       1 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I1101 23:29:14.771401       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kube-apiserver-client"
	I1101 23:29:14.771412       1 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I1101 23:29:14.771443       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1101 23:29:14.771476       1 controllermanager.go:603] Started "csrsigning"
	I1101 23:29:14.771502       1 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
	I1101 23:29:14.771506       1 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1101 23:29:14.771515       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1101 23:29:14.826374       1 controllermanager.go:603] Started "clusterrole-aggregation"
	I1101 23:29:14.826460       1 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
	I1101 23:29:14.826878       1 shared_informer.go:255] Waiting for caches to sync for ClusterRoleAggregator
	I1101 23:29:14.971573       1 controllermanager.go:603] Started "root-ca-cert-publisher"
	I1101 23:29:14.971721       1 publisher.go:107] Starting root CA certificate configmap publisher
	I1101 23:29:14.971740       1 shared_informer.go:255] Waiting for caches to sync for crt configmap
	I1101 23:29:15.020133       1 controllermanager.go:603] Started "endpoint"
	I1101 23:29:15.020366       1 endpoints_controller.go:182] Starting endpoint controller
	I1101 23:29:15.020399       1 shared_informer.go:255] Waiting for caches to sync for endpoint
	I1101 23:29:15.071228       1 controllermanager.go:603] Started "cronjob"
	I1101 23:29:15.071323       1 cronjob_controllerv2.go:135] "Starting cronjob controller v2"
	I1101 23:29:15.071374       1 shared_informer.go:255] Waiting for caches to sync for cronjob
	I1101 23:29:15.121269       1 controllermanager.go:603] Started "csrcleaner"
	I1101 23:29:15.121414       1 cleaner.go:82] Starting CSR cleaner controller
	I1101 23:29:15.170932       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-scheduler [123bdef3b120] <==
	* I1101 23:29:09.334752       1 serving.go:348] Generated self-signed cert in-memory
	W1101 23:29:12.681417       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 23:29:12.681457       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 23:29:12.681469       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 23:29:12.681474       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 23:29:12.692489       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1101 23:29:12.692525       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 23:29:12.693480       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 23:29:12.693525       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 23:29:12.693549       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 23:29:12.693571       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 23:29:12.794394       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [2f637ba60e82] <==
	* I1101 23:29:02.549811       1 serving.go:348] Generated self-signed cert in-memory
	I1101 23:29:04.756909       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1101 23:29:04.756946       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 23:29:04.760349       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 23:29:04.760405       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 23:29:04.760787       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1101 23:29:04.760885       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 23:29:04.761007       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 23:29:04.760889       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1101 23:29:04.762980       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 23:29:04.763010       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 23:29:04.861499       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 23:29:04.864044       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I1101 23:29:04.866533       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 23:29:06.332595       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1101 23:29:06.332940       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1101 23:29:06.333468       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 23:29:06.333484       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
	I1101 23:29:06.333500       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1101 23:29:06.333698       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-11-01 23:24:09 UTC, end at Tue 2022-11-01 23:29:18 UTC. --
	Nov 01 23:29:10 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:10.635058   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:10 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:10.735905   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:10 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:10.836580   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:10 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:10.937028   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:11 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:11.037416   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:11 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:11.138100   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:11 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:11.238503   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:11 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:11.339378   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:11 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:11.440441   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:11 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:11.541098   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:11 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:11.642014   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:11 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:11.742685   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:11 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:11.843454   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:11 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:11.944543   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:12 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:12.045815   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:12 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:12.147134   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:12 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:12.248090   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:12 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:12.349163   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:12 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:12.450381   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:12 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:12.551643   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:12 kubernetes-upgrade-161955 kubelet[13492]: E1101 23:29:12.651920   13492 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-161955\" not found"
	Nov 01 23:29:12 kubernetes-upgrade-161955 kubelet[13492]: I1101 23:29:12.771994   13492 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-161955"
	Nov 01 23:29:12 kubernetes-upgrade-161955 kubelet[13492]: I1101 23:29:12.772128   13492 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-161955"
	Nov 01 23:29:13 kubernetes-upgrade-161955 kubelet[13492]: I1101 23:29:13.166846   13492 apiserver.go:52] "Watching apiserver"
	Nov 01 23:29:13 kubernetes-upgrade-161955 kubelet[13492]: I1101 23:29:13.256849   13492 reconciler.go:169] "Reconciler: start to sync state"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-161955 -n kubernetes-upgrade-161955
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-161955 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-161955 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-161955 describe pod storage-provisioner: exit status 1 (55.476969ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-161955 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-161955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-161955
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-161955: (2.845805858s)
--- FAIL: TestKubernetesUpgrade (566.65s)

                                                
                                    
x
+
TestMissingContainerUpgrade (73.61s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.3299974075.exe start -p missing-upgrade-161859 --memory=2200 --driver=docker 

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.3299974075.exe start -p missing-upgrade-161859 --memory=2200 --driver=docker : exit status 78 (53.726092324s)

                                                
                                                
-- stdout --
	! [missing-upgrade-161859] minikube v1.9.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-161859
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-161859" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.27.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.27.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 31.83 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 76.11 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 117.98 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 159.80 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 200.20 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 246.75 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 291.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 347.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 402.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 455.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 506.33 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-01 23:19:34.522103969 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-161859" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-01 23:19:51.958103925 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.3299974075.exe start -p missing-upgrade-161859 --memory=2200 --driver=docker 

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.3299974075.exe start -p missing-upgrade-161859 --memory=2200 --driver=docker : exit status 70 (8.979846928s)

                                                
                                                
-- stdout --
	* [missing-upgrade-161859] minikube v1.9.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-161859
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Updating the running docker "missing-upgrade-161859" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 281.71 KiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 22.12 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 68.50 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 117.47 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 159.03 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 207.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 233.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 281.94 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 331.98 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 382.47 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 430.37 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 480.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 526.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.3299974075.exe start -p missing-upgrade-161859 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.3299974075.exe start -p missing-upgrade-161859 --memory=2200 --driver=docker : exit status 70 (4.4741918s)

                                                
                                                
-- stdout --
	* [missing-upgrade-161859] minikube v1.9.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-161859
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-161859" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2022-11-01 16:20:10.633088 -0700 PDT m=+2159.804423579
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-161859
helpers_test.go:235: (dbg) docker inspect missing-upgrade-161859:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9dcd432846467031a9f95eb9359f0a8e4c0ca55c67122f3149729595cbb1da70",
	        "Created": "2022-11-01T23:19:42.776704883Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 136154,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T23:19:43.100695617Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/9dcd432846467031a9f95eb9359f0a8e4c0ca55c67122f3149729595cbb1da70/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9dcd432846467031a9f95eb9359f0a8e4c0ca55c67122f3149729595cbb1da70/hostname",
	        "HostsPath": "/var/lib/docker/containers/9dcd432846467031a9f95eb9359f0a8e4c0ca55c67122f3149729595cbb1da70/hosts",
	        "LogPath": "/var/lib/docker/containers/9dcd432846467031a9f95eb9359f0a8e4c0ca55c67122f3149729595cbb1da70/9dcd432846467031a9f95eb9359f0a8e4c0ca55c67122f3149729595cbb1da70-json.log",
	        "Name": "/missing-upgrade-161859",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-161859:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9e8905549170beddd59f65aede6eccd474323b5d71d195f01cb0d5714c982a88-init/diff:/var/lib/docker/overlay2/9b283dac4f8b00fc66fa708cb4cefffa2996a70f4c229c15a28857048d7fdc88/diff:/var/lib/docker/overlay2/9ff108c965a9e06297091fb60e1521df68ecca05f3bef91142384dd37682e9fd/diff:/var/lib/docker/overlay2/9a55f45d33cbd87c6de64029a4c52052bc6337a9097fd25aa6b36f86fa64babd/diff:/var/lib/docker/overlay2/e1faa3d5318ed4d9ca7612256898e71cf2581d6ac2b41e74a82dcd4cd670685d/diff:/var/lib/docker/overlay2/0a7243b364227f81c6452f48b10ae614c7855e80e3ff85293aefec5b833e7295/diff:/var/lib/docker/overlay2/b2ad7fc463e128ecec363569c0ae8df97d5c4b2f9fdecd40d9775107e72c7db8/diff:/var/lib/docker/overlay2/0e7b2bd1402edaac22f1033f537a746786d9cdca6011c016b530c43c0609d7a0/diff:/var/lib/docker/overlay2/b7e2d4fff4eb761745add37347781b535e1d47ed10c1578bcef4d485ef849dd7/diff:/var/lib/docker/overlay2/300a951ced5e48e6f36334978a28da36fb5c6f2798c1138f2c8d358d3879a393/diff:/var/lib/docker/overlay2/d38191
d177365dae8ededbfc60b2b78c1f808237cb035105450c0fd7be258ac8/diff:/var/lib/docker/overlay2/8033d2d34fac3efba9e541516f559169ffc7b17d8530acb48a984396e4cce761/diff:/var/lib/docker/overlay2/ca5d4ba98f2706cf50fffc0bf9bbd96827d8923c63fce44c0cff3a083dd4d065/diff:/var/lib/docker/overlay2/a343b83f46f7302662a173eb2cf5c44b3f4ef4d53296704d932c198a9fe6b604/diff:/var/lib/docker/overlay2/ebdd14eb9316a922b2d55499a25917e46616991e9c6c31472554485544169f2e/diff:/var/lib/docker/overlay2/e012ab724b9e76a7a06ff5eeb9ab8099e78fc23dc49c8f071596fe0bc00a5818/diff:/var/lib/docker/overlay2/8b031095c98c34d5e370f48cb0c674a4b8f285a5e4fb78c3a76fef2df39bbd45/diff:/var/lib/docker/overlay2/15545188dde4f134f6209204348887681525e1d6f278c58c6f2e06985981fef0/diff:/var/lib/docker/overlay2/15f4ce84eabb3032bd29513036b1cfac1c2ce9f69d4b739926505fc276f48a3a/diff:/var/lib/docker/overlay2/3f1f5f82e85a8089620dfca13ee08df8382bc91b714abb87a4b7b9fef53ae811/diff:/var/lib/docker/overlay2/1b4b066ede35d5a92ced78a2d12583e508425b65997a7014db4f85fd466b28d0/diff:/var/lib/d
ocker/overlay2/8930de7c458b0d48d7dfb70a64fb4e54c4b9ff1db71d4af5c6241ade8dffec63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e8905549170beddd59f65aede6eccd474323b5d71d195f01cb0d5714c982a88/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e8905549170beddd59f65aede6eccd474323b5d71d195f01cb0d5714c982a88/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e8905549170beddd59f65aede6eccd474323b5d71d195f01cb0d5714c982a88/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-161859",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-161859/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-161859",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-161859",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-161859",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f8a4f14dc24faeff01171d00c643a442805add8c42e647e1d5658b014c2c6f11",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51972"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51973"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51974"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f8a4f14dc24f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "41e5de1432dc78f84fef933825b30d0440bab091cf2eafd511bd1c09da853a2c",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "3a928b6ecab9e52c08e04ca22dcaf610bf12a0b525dfd095dafa40b3be35cf51",
	                    "EndpointID": "41e5de1432dc78f84fef933825b30d0440bab091cf2eafd511bd1c09da853a2c",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-161859 -n missing-upgrade-161859
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-161859 -n missing-upgrade-161859: exit status 6 (415.708512ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 16:20:11.098916   11991 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-161859" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-161859" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-161859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-161859
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-161859: (2.356232926s)
--- FAIL: TestMissingContainerUpgrade (73.61s)
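The repeated failure above comes from `sudo systemctl -f restart docker` rejecting the rewritten /lib/systemd/system/docker.service. As the comments in the generated unit explain, systemd accepts multiple ExecStart= lines only for Type=oneshot services, so any override must first clear the inherited command with an empty ExecStart= before defining a new one. The sketch below is a minimal, hypothetical illustration of that drop-in pattern plus the diagnostic commands named in the error output; the drop-in path and dockerd flags are examples only and are not taken from this run.

	# Hypothetical drop-in illustrating the ExecStart= clearing pattern described
	# in the generated unit above (path and flags are examples, not from this run).
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
	[Service]
	# The empty assignment clears the ExecStart= inherited from the base unit,
	# so the service ends up with exactly one start command.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF

	# Reload and inspect the failure, as the error message itself suggests.
	sudo systemctl daemon-reload
	sudo systemctl restart docker
	systemctl status docker.service
	journalctl -xe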

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (45.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.717068278.exe start -p stopped-upgrade-162013 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.717068278.exe start -p stopped-upgrade-162013 --memory=2200 --vm-driver=docker : exit status 70 (33.900156631s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-162013] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig3779789190
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-01 23:20:28.355146985 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-162013" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-01 23:20:45.494146941 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-162013", then "minikube start -p stopped-upgrade-162013 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 28.80 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 54.48 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 83.45 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 117.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 154.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 170.23 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 200.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 228.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 274.89 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 317.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 363.84 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 406.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 439.70 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 488.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-01 23:20:45.494146941 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.717068278.exe start -p stopped-upgrade-162013 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.717068278.exe start -p stopped-upgrade-162013 --memory=2200 --vm-driver=docker : exit status 70 (4.306834731s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-162013] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig3362292655
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-162013" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.717068278.exe start -p stopped-upgrade-162013 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.717068278.exe start -p stopped-upgrade-162013 --memory=2200 --vm-driver=docker : exit status 70 (4.46698931s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-162013] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1664104435
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-162013" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (45.58s)
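
Both failed starts above bottom out in the same generic systemd message, so the captured log never shows why dockerd refused to start inside the kic container. A minimal diagnostic sketch for a local reproduction (assuming the stopped-upgrade-162013 container created by this profile is still running on the host; these commands are illustrative only and are not part of the recorded run):

	# the kic container is named after the profile, as in the "docker exec old-k8s-version-163757 ..." calls elsewhere in this report
	docker exec stopped-upgrade-162013 systemctl status docker.service --no-pager
	docker exec stopped-upgrade-162013 journalctl -u docker.service --no-pager -n 50
	# show the unit systemd actually loaded; the harness runs the same "systemctl cat docker.service" over SSH later in this report
	docker exec stopped-upgrade-162013 systemctl cat docker.service --no-pager

The journalctl output is what the "journalctl -xe" hint in the error message points at; reading it through docker exec avoids depending on a working SSH/dockerd path when the daemon itself is the component that failed.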

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (249.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-163757 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E1101 16:37:59.480528    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-163757 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m9.345213805s)

                                                
                                                
-- stdout --
	* [old-k8s-version-163757] minikube v1.27.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-163757 in cluster old-k8s-version-163757
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 16:37:57.976926   16937 out.go:296] Setting OutFile to fd 1 ...
	I1101 16:37:57.977112   16937 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 16:37:57.977117   16937 out.go:309] Setting ErrFile to fd 2...
	I1101 16:37:57.977122   16937 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 16:37:57.977232   16937 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	I1101 16:37:57.977808   16937 out.go:303] Setting JSON to false
	I1101 16:37:57.996343   16937 start.go:116] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4052,"bootTime":1667341825,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1101 16:37:57.996430   16937 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1101 16:37:58.018206   16937 out.go:177] * [old-k8s-version-163757] minikube v1.27.1 on Darwin 13.0
	I1101 16:37:58.059873   16937 notify.go:220] Checking for updates...
	I1101 16:37:58.081188   16937 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 16:37:58.102952   16937 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 16:37:58.124151   16937 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1101 16:37:58.146281   16937 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 16:37:58.168045   16937 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	I1101 16:37:58.189571   16937 config.go:180] Loaded profile config "kubenet-161858": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 16:37:58.189683   16937 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 16:37:58.254717   16937 docker.go:137] docker version: linux-20.10.20
	I1101 16:37:58.254864   16937 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 16:37:58.402287   16937 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-01 23:37:58.311826699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 16:37:58.424191   16937 out.go:177] * Using the docker driver based on user configuration
	I1101 16:37:58.445820   16937 start.go:282] selected driver: docker
	I1101 16:37:58.445839   16937 start.go:808] validating driver "docker" against <nil>
	I1101 16:37:58.445856   16937 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 16:37:58.448436   16937 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 16:37:58.595011   16937 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-01 23:37:58.505547458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 16:37:58.595139   16937 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1101 16:37:58.595280   16937 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 16:37:58.616939   16937 out.go:177] * Using Docker Desktop driver with root privileges
	I1101 16:37:58.638905   16937 cni.go:95] Creating CNI manager for ""
	I1101 16:37:58.638936   16937 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 16:37:58.638972   16937 start_flags.go:317] config:
	{Name:old-k8s-version-163757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-163757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 16:37:58.660897   16937 out.go:177] * Starting control plane node old-k8s-version-163757 in cluster old-k8s-version-163757
	I1101 16:37:58.703997   16937 cache.go:120] Beginning downloading kic base image for docker with docker
	I1101 16:37:58.725776   16937 out.go:177] * Pulling base image ...
	I1101 16:37:58.767746   16937 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1101 16:37:58.767807   16937 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 16:37:58.767844   16937 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1101 16:37:58.767863   16937 cache.go:57] Caching tarball of preloaded images
	I1101 16:37:58.768106   16937 preload.go:174] Found /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 16:37:58.768119   16937 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1101 16:37:58.768836   16937 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/config.json ...
	I1101 16:37:58.768931   16937 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/config.json: {Name:mk3d3fd5753d2c1ea746c85af49f9696100b930b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:37:58.825426   16937 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1101 16:37:58.825464   16937 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1101 16:37:58.825474   16937 cache.go:208] Successfully downloaded all kic artifacts
	I1101 16:37:58.825543   16937 start.go:364] acquiring machines lock for old-k8s-version-163757: {Name:mk05bab20388c402631f99670934058a4ed5d425 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 16:37:58.825718   16937 start.go:368] acquired machines lock for "old-k8s-version-163757" in 162.819µs
	I1101 16:37:58.825761   16937 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-163757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-163757 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1101 16:37:58.825844   16937 start.go:125] createHost starting for "" (driver="docker")
	I1101 16:37:58.869338   16937 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1101 16:37:58.869761   16937 start.go:159] libmachine.API.Create for "old-k8s-version-163757" (driver="docker")
	I1101 16:37:58.869821   16937 client.go:168] LocalClient.Create starting
	I1101 16:37:58.870058   16937 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem
	I1101 16:37:58.870196   16937 main.go:134] libmachine: Decoding PEM data...
	I1101 16:37:58.870243   16937 main.go:134] libmachine: Parsing certificate...
	I1101 16:37:58.870379   16937 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem
	I1101 16:37:58.870465   16937 main.go:134] libmachine: Decoding PEM data...
	I1101 16:37:58.870488   16937 main.go:134] libmachine: Parsing certificate...
	I1101 16:37:58.871208   16937 cli_runner.go:164] Run: docker network inspect old-k8s-version-163757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 16:37:58.927682   16937 cli_runner.go:211] docker network inspect old-k8s-version-163757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 16:37:58.927799   16937 network_create.go:272] running [docker network inspect old-k8s-version-163757] to gather additional debugging logs...
	I1101 16:37:58.927823   16937 cli_runner.go:164] Run: docker network inspect old-k8s-version-163757
	W1101 16:37:58.986541   16937 cli_runner.go:211] docker network inspect old-k8s-version-163757 returned with exit code 1
	I1101 16:37:58.986568   16937 network_create.go:275] error running [docker network inspect old-k8s-version-163757]: docker network inspect old-k8s-version-163757: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-163757
	I1101 16:37:58.986580   16937 network_create.go:277] output of [docker network inspect old-k8s-version-163757]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-163757
	
	** /stderr **
	I1101 16:37:58.986680   16937 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 16:37:59.044339   16937 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000558538] misses:0}
	I1101 16:37:59.044377   16937 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 16:37:59.044390   16937 network_create.go:115] attempt to create docker network old-k8s-version-163757 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 16:37:59.044479   16937 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-163757 old-k8s-version-163757
	W1101 16:37:59.102441   16937 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-163757 old-k8s-version-163757 returned with exit code 1
	W1101 16:37:59.102479   16937 network_create.go:107] failed to create docker network old-k8s-version-163757 192.168.49.0/24, will retry: subnet is taken
	I1101 16:37:59.102743   16937 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000558538] amended:false}} dirty:map[] misses:0}
	I1101 16:37:59.102761   16937 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 16:37:59.102983   16937 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000558538] amended:true}} dirty:map[192.168.49.0:0xc000558538 192.168.58.0:0xc00013d0e0] misses:0}
	I1101 16:37:59.102997   16937 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 16:37:59.103016   16937 network_create.go:115] attempt to create docker network old-k8s-version-163757 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1101 16:37:59.103111   16937 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-163757 old-k8s-version-163757
	W1101 16:37:59.159727   16937 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-163757 old-k8s-version-163757 returned with exit code 1
	W1101 16:37:59.159766   16937 network_create.go:107] failed to create docker network old-k8s-version-163757 192.168.58.0/24, will retry: subnet is taken
	I1101 16:37:59.160030   16937 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000558538] amended:true}} dirty:map[192.168.49.0:0xc000558538 192.168.58.0:0xc00013d0e0] misses:1}
	I1101 16:37:59.160047   16937 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 16:37:59.160255   16937 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000558538] amended:true}} dirty:map[192.168.49.0:0xc000558538 192.168.58.0:0xc00013d0e0 192.168.67.0:0xc000012ca8] misses:1}
	I1101 16:37:59.160271   16937 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 16:37:59.160278   16937 network_create.go:115] attempt to create docker network old-k8s-version-163757 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1101 16:37:59.160367   16937 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-163757 old-k8s-version-163757
	W1101 16:37:59.217609   16937 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-163757 old-k8s-version-163757 returned with exit code 1
	W1101 16:37:59.217647   16937 network_create.go:107] failed to create docker network old-k8s-version-163757 192.168.67.0/24, will retry: subnet is taken
	I1101 16:37:59.217885   16937 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000558538] amended:true}} dirty:map[192.168.49.0:0xc000558538 192.168.58.0:0xc00013d0e0 192.168.67.0:0xc000012ca8] misses:2}
	I1101 16:37:59.217903   16937 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 16:37:59.218113   16937 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000558538] amended:true}} dirty:map[192.168.49.0:0xc000558538 192.168.58.0:0xc00013d0e0 192.168.67.0:0xc000012ca8 192.168.76.0:0xc000012ce0] misses:2}
	I1101 16:37:59.218126   16937 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 16:37:59.218133   16937 network_create.go:115] attempt to create docker network old-k8s-version-163757 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 16:37:59.218210   16937 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-163757 old-k8s-version-163757
	I1101 16:37:59.308171   16937 network_create.go:99] docker network old-k8s-version-163757 192.168.76.0/24 created
	I1101 16:37:59.308210   16937 kic.go:106] calculated static IP "192.168.76.2" for the "old-k8s-version-163757" container
	I1101 16:37:59.308361   16937 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 16:37:59.368149   16937 cli_runner.go:164] Run: docker volume create old-k8s-version-163757 --label name.minikube.sigs.k8s.io=old-k8s-version-163757 --label created_by.minikube.sigs.k8s.io=true
	I1101 16:37:59.427869   16937 oci.go:103] Successfully created a docker volume old-k8s-version-163757
	I1101 16:37:59.427988   16937 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-163757-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-163757 --entrypoint /usr/bin/test -v old-k8s-version-163757:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1101 16:37:59.886487   16937 oci.go:107] Successfully prepared a docker volume old-k8s-version-163757
	I1101 16:37:59.886538   16937 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1101 16:37:59.886555   16937 kic.go:179] Starting extracting preloaded images to volume ...
	I1101 16:37:59.886662   16937 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-163757:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 16:38:04.038556   16937 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-163757:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (4.151870346s)
	I1101 16:38:04.038577   16937 kic.go:188] duration metric: took 4.152052 seconds to extract preloaded images to volume
	I1101 16:38:04.038690   16937 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 16:38:04.187301   16937 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-163757 --name old-k8s-version-163757 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-163757 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-163757 --network old-k8s-version-163757 --ip 192.168.76.2 --volume old-k8s-version-163757:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1101 16:38:04.553084   16937 cli_runner.go:164] Run: docker container inspect old-k8s-version-163757 --format={{.State.Running}}
	I1101 16:38:04.615559   16937 cli_runner.go:164] Run: docker container inspect old-k8s-version-163757 --format={{.State.Status}}
	I1101 16:38:04.682209   16937 cli_runner.go:164] Run: docker exec old-k8s-version-163757 stat /var/lib/dpkg/alternatives/iptables
	I1101 16:38:04.794582   16937 oci.go:144] the created container "old-k8s-version-163757" has a running status.
	I1101 16:38:04.794627   16937 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/old-k8s-version-163757/id_rsa...
	I1101 16:38:05.014578   16937 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/old-k8s-version-163757/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 16:38:05.118115   16937 cli_runner.go:164] Run: docker container inspect old-k8s-version-163757 --format={{.State.Status}}
	I1101 16:38:05.181364   16937 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 16:38:05.181382   16937 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-163757 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 16:38:05.291282   16937 cli_runner.go:164] Run: docker container inspect old-k8s-version-163757 --format={{.State.Status}}
	I1101 16:38:05.351546   16937 machine.go:88] provisioning docker machine ...
	I1101 16:38:05.351594   16937 ubuntu.go:169] provisioning hostname "old-k8s-version-163757"
	I1101 16:38:05.351723   16937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:38:05.412973   16937 main.go:134] libmachine: Using SSH client type: native
	I1101 16:38:05.413162   16937 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53802 <nil> <nil>}
	I1101 16:38:05.413179   16937 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-163757 && echo "old-k8s-version-163757" | sudo tee /etc/hostname
	I1101 16:38:05.537523   16937 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-163757
	
	I1101 16:38:05.537669   16937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:38:05.596849   16937 main.go:134] libmachine: Using SSH client type: native
	I1101 16:38:05.597014   16937 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53802 <nil> <nil>}
	I1101 16:38:05.597028   16937 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-163757' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-163757/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-163757' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 16:38:05.718444   16937 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 16:38:05.718461   16937 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15232-2108/.minikube CaCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15232-2108/.minikube}
	I1101 16:38:05.718477   16937 ubuntu.go:177] setting up certificates
	I1101 16:38:05.718485   16937 provision.go:83] configureAuth start
	I1101 16:38:05.718573   16937 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-163757
	I1101 16:38:05.778632   16937 provision.go:138] copyHostCerts
	I1101 16:38:05.778734   16937 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem, removing ...
	I1101 16:38:05.778743   16937 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem
	I1101 16:38:05.778868   16937 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem (1123 bytes)
	I1101 16:38:05.779063   16937 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem, removing ...
	I1101 16:38:05.779069   16937 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem
	I1101 16:38:05.779138   16937 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem (1675 bytes)
	I1101 16:38:05.779301   16937 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem, removing ...
	I1101 16:38:05.779307   16937 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem
	I1101 16:38:05.779371   16937 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem (1082 bytes)
	I1101 16:38:05.779488   16937 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-163757 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-163757]
	I1101 16:38:05.824747   16937 provision.go:172] copyRemoteCerts
	I1101 16:38:05.824809   16937 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 16:38:05.824870   16937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:38:05.885972   16937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53802 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/old-k8s-version-163757/id_rsa Username:docker}
	I1101 16:38:05.973483   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 16:38:05.992101   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 16:38:06.011057   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 16:38:06.030301   16937 provision.go:86] duration metric: configureAuth took 311.804136ms
	I1101 16:38:06.030315   16937 ubuntu.go:193] setting minikube options for container-runtime
	I1101 16:38:06.030462   16937 config.go:180] Loaded profile config "old-k8s-version-163757": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1101 16:38:06.030544   16937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:38:06.091125   16937 main.go:134] libmachine: Using SSH client type: native
	I1101 16:38:06.091292   16937 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53802 <nil> <nil>}
	I1101 16:38:06.091307   16937 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 16:38:06.209112   16937 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1101 16:38:06.209129   16937 ubuntu.go:71] root file system type: overlay
	I1101 16:38:06.209279   16937 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 16:38:06.209379   16937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:38:06.269885   16937 main.go:134] libmachine: Using SSH client type: native
	I1101 16:38:06.270053   16937 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53802 <nil> <nil>}
	I1101 16:38:06.270103   16937 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 16:38:06.398471   16937 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 16:38:06.398587   16937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:38:06.458586   16937 main.go:134] libmachine: Using SSH client type: native
	I1101 16:38:06.458756   16937 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53802 <nil> <nil>}
	I1101 16:38:06.458769   16937 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 16:38:07.091244   16937 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-01 23:38:06.406907963 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1101 16:38:07.091284   16937 machine.go:91] provisioned docker machine in 1.739727618s
	I1101 16:38:07.091291   16937 client.go:171] LocalClient.Create took 8.221525248s
	I1101 16:38:07.091328   16937 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-163757" took 8.221632076s
	I1101 16:38:07.091337   16937 start.go:300] post-start starting for "old-k8s-version-163757" (driver="docker")
	I1101 16:38:07.091342   16937 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 16:38:07.091427   16937 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 16:38:07.091503   16937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:38:07.152354   16937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53802 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/old-k8s-version-163757/id_rsa Username:docker}
	I1101 16:38:07.239570   16937 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 16:38:07.243124   16937 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 16:38:07.243138   16937 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 16:38:07.243169   16937 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 16:38:07.243189   16937 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1101 16:38:07.243201   16937 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/addons for local assets ...
	I1101 16:38:07.243303   16937 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/files for local assets ...
	I1101 16:38:07.243518   16937 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem -> 34132.pem in /etc/ssl/certs
	I1101 16:38:07.243723   16937 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 16:38:07.251190   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /etc/ssl/certs/34132.pem (1708 bytes)
	I1101 16:38:07.268429   16937 start.go:303] post-start completed in 177.083144ms
	I1101 16:38:07.269007   16937 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-163757
	I1101 16:38:07.329141   16937 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/config.json ...
	I1101 16:38:07.329570   16937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 16:38:07.329653   16937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:38:07.389818   16937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53802 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/old-k8s-version-163757/id_rsa Username:docker}
	I1101 16:38:07.473434   16937 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 16:38:07.477779   16937 start.go:128] duration metric: createHost completed in 8.651987943s
	I1101 16:38:07.477796   16937 start.go:83] releasing machines lock for "old-k8s-version-163757", held for 8.65213336s
	I1101 16:38:07.477904   16937 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-163757
	I1101 16:38:07.537871   16937 ssh_runner.go:195] Run: systemctl --version
	I1101 16:38:07.537871   16937 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1101 16:38:07.537944   16937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:38:07.537972   16937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:38:07.602370   16937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53802 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/old-k8s-version-163757/id_rsa Username:docker}
	I1101 16:38:07.603185   16937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53802 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/old-k8s-version-163757/id_rsa Username:docker}
	I1101 16:38:07.685572   16937 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 16:38:07.940716   16937 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1101 16:38:07.940803   16937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 16:38:07.951507   16937 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 16:38:07.964553   16937 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 16:38:08.039131   16937 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 16:38:08.114478   16937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 16:38:08.189292   16937 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 16:38:08.389977   16937 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 16:38:08.419040   16937 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 16:38:08.492037   16937 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	I1101 16:38:08.492243   16937 cli_runner.go:164] Run: docker exec -t old-k8s-version-163757 dig +short host.docker.internal
	I1101 16:38:08.611278   16937 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1101 16:38:08.611400   16937 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1101 16:38:08.615672   16937 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 16:38:08.625871   16937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:38:08.685936   16937 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1101 16:38:08.686034   16937 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 16:38:08.710280   16937 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1101 16:38:08.710299   16937 docker.go:543] Images already preloaded, skipping extraction
	I1101 16:38:08.710391   16937 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 16:38:08.733131   16937 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1101 16:38:08.733151   16937 cache_images.go:84] Images are preloaded, skipping loading
	I1101 16:38:08.733263   16937 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1101 16:38:08.805811   16937 cni.go:95] Creating CNI manager for ""
	I1101 16:38:08.805825   16937 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 16:38:08.805842   16937 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 16:38:08.805859   16937 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-163757 NodeName:old-k8s-version-163757 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1101 16:38:08.805973   16937 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-163757"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-163757
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 16:38:08.806049   16937 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-163757 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-163757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 16:38:08.806126   16937 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1101 16:38:08.814148   16937 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 16:38:08.814214   16937 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 16:38:08.821204   16937 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I1101 16:38:08.834704   16937 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 16:38:08.847331   16937 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1101 16:38:08.861286   16937 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 16:38:08.865366   16937 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 16:38:08.874948   16937 certs.go:54] Setting up /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757 for IP: 192.168.76.2
	I1101 16:38:08.875102   16937 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key
	I1101 16:38:08.875226   16937 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key
	I1101 16:38:08.875285   16937 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/client.key
	I1101 16:38:08.875306   16937 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/client.crt with IP's: []
	I1101 16:38:09.116879   16937 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/client.crt ...
	I1101 16:38:09.116902   16937 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/client.crt: {Name:mk1f98a48470c980bf9d87204f7412432d54a873 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:38:09.117292   16937 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/client.key ...
	I1101 16:38:09.117301   16937 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/client.key: {Name:mk788ef114a70483dd1e1d995f429f97161614e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:38:09.117549   16937 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.key.31bdca25
	I1101 16:38:09.117592   16937 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 16:38:09.410590   16937 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.crt.31bdca25 ...
	I1101 16:38:09.410607   16937 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.crt.31bdca25: {Name:mk3be343810d4e3a0f7bf1b19912d31132169903 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:38:09.410902   16937 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.key.31bdca25 ...
	I1101 16:38:09.410910   16937 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.key.31bdca25: {Name:mk20cd7b0097912c832eac09b9c543f06f73f696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:38:09.411096   16937 certs.go:320] copying /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.crt
	I1101 16:38:09.411267   16937 certs.go:324] copying /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.key
	I1101 16:38:09.411452   16937 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/proxy-client.key
	I1101 16:38:09.411471   16937 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/proxy-client.crt with IP's: []
	I1101 16:38:09.536754   16937 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/proxy-client.crt ...
	I1101 16:38:09.536774   16937 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/proxy-client.crt: {Name:mkf8afe1b9a1b7fe0005801176329eeeda97ea5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:38:09.537074   16937 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/proxy-client.key ...
	I1101 16:38:09.537083   16937 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/proxy-client.key: {Name:mk48a662e23171d72d31658eb7281fa0f0b5bb0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:38:09.537566   16937 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem (1338 bytes)
	W1101 16:38:09.537619   16937 certs.go:384] ignoring /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413_empty.pem, impossibly tiny 0 bytes
	I1101 16:38:09.537634   16937 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 16:38:09.537670   16937 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem (1082 bytes)
	I1101 16:38:09.537706   16937 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem (1123 bytes)
	I1101 16:38:09.537741   16937 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem (1675 bytes)
	I1101 16:38:09.537813   16937 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem (1708 bytes)
	I1101 16:38:09.538304   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 16:38:09.557194   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 16:38:09.575501   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 16:38:09.592434   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 16:38:09.610238   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 16:38:09.627758   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 16:38:09.645430   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 16:38:09.663265   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 16:38:09.679869   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /usr/share/ca-certificates/34132.pem (1708 bytes)
	I1101 16:38:09.697783   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 16:38:09.714736   16937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem --> /usr/share/ca-certificates/3413.pem (1338 bytes)
	I1101 16:38:09.732183   16937 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 16:38:09.745130   16937 ssh_runner.go:195] Run: openssl version
	I1101 16:38:09.750762   16937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34132.pem && ln -fs /usr/share/ca-certificates/34132.pem /etc/ssl/certs/34132.pem"
	I1101 16:38:09.758694   16937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34132.pem
	I1101 16:38:09.762864   16937 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  1 22:49 /usr/share/ca-certificates/34132.pem
	I1101 16:38:09.762915   16937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34132.pem
	I1101 16:38:09.768554   16937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34132.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 16:38:09.777424   16937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 16:38:09.785447   16937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:38:09.789684   16937 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  1 22:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:38:09.789765   16937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:38:09.795412   16937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 16:38:09.803446   16937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3413.pem && ln -fs /usr/share/ca-certificates/3413.pem /etc/ssl/certs/3413.pem"
	I1101 16:38:09.811372   16937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3413.pem
	I1101 16:38:09.815390   16937 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  1 22:49 /usr/share/ca-certificates/3413.pem
	I1101 16:38:09.815439   16937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3413.pem
	I1101 16:38:09.820979   16937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3413.pem /etc/ssl/certs/51391683.0"
	I1101 16:38:09.829001   16937 kubeadm.go:396] StartCluster: {Name:old-k8s-version-163757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-163757 Namespace:default APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 16:38:09.829212   16937 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 16:38:09.852443   16937 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 16:38:09.860096   16937 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 16:38:09.867693   16937 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 16:38:09.867751   16937 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 16:38:09.875883   16937 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 16:38:09.875919   16937 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 16:38:09.924340   16937 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1101 16:38:09.924394   16937 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 16:38:10.236730   16937 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 16:38:10.236867   16937 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 16:38:10.236990   16937 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 16:38:10.460609   16937 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 16:38:10.461598   16937 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 16:38:10.467814   16937 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1101 16:38:10.542515   16937 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 16:38:10.566868   16937 out.go:204]   - Generating certificates and keys ...
	I1101 16:38:10.566956   16937 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 16:38:10.567019   16937 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 16:38:10.896116   16937 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 16:38:10.981873   16937 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1101 16:38:11.121171   16937 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1101 16:38:11.316657   16937 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1101 16:38:11.381628   16937 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1101 16:38:11.381824   16937 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-163757 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 16:38:11.775151   16937 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1101 16:38:11.775275   16937 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-163757 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 16:38:12.048296   16937 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 16:38:12.257459   16937 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 16:38:12.521259   16937 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1101 16:38:12.521346   16937 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 16:38:12.648842   16937 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 16:38:12.747312   16937 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 16:38:13.014501   16937 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 16:38:13.124845   16937 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 16:38:13.125274   16937 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 16:38:13.146896   16937 out.go:204]   - Booting up control plane ...
	I1101 16:38:13.147042   16937 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 16:38:13.147213   16937 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 16:38:13.147333   16937 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 16:38:13.147482   16937 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 16:38:13.147699   16937 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 16:38:53.106280   16937 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1101 16:38:53.107098   16937 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:38:53.107335   16937 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:38:58.104982   16937 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:38:58.105188   16937 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:39:08.098159   16937 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:39:08.098311   16937 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:39:28.084427   16937 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:39:28.084598   16937 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:40:08.078674   16937 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:40:08.078877   16937 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:40:08.078889   16937 kubeadm.go:317] 
	I1101 16:40:08.078929   16937 kubeadm.go:317] Unfortunately, an error has occurred:
	I1101 16:40:08.078964   16937 kubeadm.go:317] 	timed out waiting for the condition
	I1101 16:40:08.078977   16937 kubeadm.go:317] 
	I1101 16:40:08.079037   16937 kubeadm.go:317] This error is likely caused by:
	I1101 16:40:08.079090   16937 kubeadm.go:317] 	- The kubelet is not running
	I1101 16:40:08.079205   16937 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1101 16:40:08.079215   16937 kubeadm.go:317] 
	I1101 16:40:08.079292   16937 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1101 16:40:08.079313   16937 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1101 16:40:08.079355   16937 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1101 16:40:08.079365   16937 kubeadm.go:317] 
	I1101 16:40:08.079454   16937 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1101 16:40:08.079527   16937 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1101 16:40:08.079587   16937 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1101 16:40:08.079625   16937 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1101 16:40:08.079693   16937 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1101 16:40:08.079716   16937 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1101 16:40:08.082212   16937 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1101 16:40:08.082335   16937 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1101 16:40:08.082408   16937 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 16:40:08.082466   16937 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1101 16:40:08.082523   16937 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1101 16:40:08.082667   16937 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-163757 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-163757 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-163757 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-163757 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1101 16:40:08.082694   16937 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1101 16:40:08.505619   16937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 16:40:08.515496   16937 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 16:40:08.515561   16937 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 16:40:08.522946   16937 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 16:40:08.522968   16937 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 16:40:08.570656   16937 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1101 16:40:08.570693   16937 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 16:40:08.863455   16937 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 16:40:08.863541   16937 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 16:40:08.863630   16937 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 16:40:09.098016   16937 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 16:40:09.099110   16937 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 16:40:09.106038   16937 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1101 16:40:09.186501   16937 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 16:40:09.208281   16937 out.go:204]   - Generating certificates and keys ...
	I1101 16:40:09.208406   16937 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 16:40:09.208492   16937 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 16:40:09.208612   16937 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 16:40:09.208688   16937 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1101 16:40:09.208748   16937 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 16:40:09.208805   16937 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1101 16:40:09.208867   16937 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1101 16:40:09.208940   16937 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1101 16:40:09.209034   16937 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 16:40:09.209094   16937 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 16:40:09.209135   16937 kubeadm.go:317] [certs] Using the existing "sa" key
	I1101 16:40:09.209177   16937 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 16:40:09.355622   16937 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 16:40:09.534369   16937 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 16:40:09.672695   16937 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 16:40:09.744276   16937 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 16:40:09.745413   16937 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 16:40:09.767220   16937 out.go:204]   - Booting up control plane ...
	I1101 16:40:09.767398   16937 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 16:40:09.767546   16937 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 16:40:09.767656   16937 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 16:40:09.767786   16937 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 16:40:09.768059   16937 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 16:40:49.734406   16937 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1101 16:40:49.735123   16937 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:40:49.735332   16937 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:40:54.732546   16937 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:40:54.732707   16937 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:41:04.727305   16937 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:41:04.727498   16937 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:41:24.714027   16937 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:41:24.714226   16937 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:42:04.686646   16937 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:42:04.686855   16937 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:42:04.686870   16937 kubeadm.go:317] 
	I1101 16:42:04.686905   16937 kubeadm.go:317] Unfortunately, an error has occurred:
	I1101 16:42:04.686943   16937 kubeadm.go:317] 	timed out waiting for the condition
	I1101 16:42:04.686948   16937 kubeadm.go:317] 
	I1101 16:42:04.687012   16937 kubeadm.go:317] This error is likely caused by:
	I1101 16:42:04.687067   16937 kubeadm.go:317] 	- The kubelet is not running
	I1101 16:42:04.687209   16937 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1101 16:42:04.687223   16937 kubeadm.go:317] 
	I1101 16:42:04.687325   16937 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1101 16:42:04.687368   16937 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1101 16:42:04.687407   16937 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1101 16:42:04.687415   16937 kubeadm.go:317] 
	I1101 16:42:04.687529   16937 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1101 16:42:04.687635   16937 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1101 16:42:04.687741   16937 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1101 16:42:04.687795   16937 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1101 16:42:04.687885   16937 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1101 16:42:04.687922   16937 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1101 16:42:04.690203   16937 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1101 16:42:04.690306   16937 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1101 16:42:04.690384   16937 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 16:42:04.690443   16937 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1101 16:42:04.690508   16937 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1101 16:42:04.690531   16937 kubeadm.go:398] StartCluster complete in 3m54.833975793s
	I1101 16:42:04.690630   16937 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:42:04.713586   16937 logs.go:274] 0 containers: []
	W1101 16:42:04.713597   16937 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:42:04.713680   16937 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:42:04.738878   16937 logs.go:274] 0 containers: []
	W1101 16:42:04.738888   16937 logs.go:276] No container was found matching "etcd"
	I1101 16:42:04.738969   16937 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:42:04.760938   16937 logs.go:274] 0 containers: []
	W1101 16:42:04.760950   16937 logs.go:276] No container was found matching "coredns"
	I1101 16:42:04.761032   16937 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:42:04.783149   16937 logs.go:274] 0 containers: []
	W1101 16:42:04.783162   16937 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:42:04.783245   16937 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:42:04.807546   16937 logs.go:274] 0 containers: []
	W1101 16:42:04.807556   16937 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:42:04.807637   16937 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:42:04.829796   16937 logs.go:274] 0 containers: []
	W1101 16:42:04.829808   16937 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:42:04.829893   16937 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:42:04.852418   16937 logs.go:274] 0 containers: []
	W1101 16:42:04.852430   16937 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:42:04.852514   16937 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:42:04.874628   16937 logs.go:274] 0 containers: []
	W1101 16:42:04.874647   16937 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:42:04.874655   16937 logs.go:123] Gathering logs for kubelet ...
	I1101 16:42:04.874663   16937 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:42:04.914041   16937 logs.go:123] Gathering logs for dmesg ...
	I1101 16:42:04.914055   16937 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:42:04.926338   16937 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:42:04.926358   16937 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:42:04.982014   16937 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:42:04.982025   16937 logs.go:123] Gathering logs for Docker ...
	I1101 16:42:04.982034   16937 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:42:04.998001   16937 logs.go:123] Gathering logs for container status ...
	I1101 16:42:04.998017   16937 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:42:07.046997   16937 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048987649s)
	W1101 16:42:07.047145   16937 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1101 16:42:07.047161   16937 out.go:239] * 
	W1101 16:42:07.047275   16937 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 16:42:07.047291   16937 out.go:239] * 
	W1101 16:42:07.047919   16937 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 16:42:07.143929   16937 out.go:177] 
	W1101 16:42:07.187900   16937 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 16:42:07.188021   16937 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1101 16:42:07.188145   16937 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1101 16:42:07.230635   16937 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-163757 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-163757
helpers_test.go:235: (dbg) docker inspect old-k8s-version-163757:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e",
	        "Created": "2022-11-01T23:38:04.256272958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T23:38:04.553352Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/hostname",
	        "HostsPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/hosts",
	        "LogPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e-json.log",
	        "Name": "/old-k8s-version-163757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-163757:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-163757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a-init/diff:/var/lib/docker/overlay2/397c781354d1ae8b5c71df69b26a9a2493cf01723d23317a9b36f56b62ab53f3/diff:/var/lib/docker/overlay2/fe3fd9f7a011255c997093c6f7e1cb70c20cab26db5f52ff8b83c33d58519532/diff:/var/lib/docker/overlay2/f7328bad1e482720081fe1f9d1ab2ee05c71a9060abf63daf63a25e84818f237/diff:/var/lib/docker/overlay2/ca039979ed22affed678394443deee5ed35f2eb49243537b4205433189b87b2c/diff:/var/lib/docker/overlay2/a2ee3e754036b8777f801c847988e78d9b0ef881e82ea7467cef35a1261b9e20/diff:/var/lib/docker/overlay2/3de609efaeca546b0261017a1b19a9fa9ff6c9272609346b897e8075687c3698/diff:/var/lib/docker/overlay2/9101d388c406c87b2d10dc219dc3225ea59bfbedfc167adbfdf7578ed74a528b/diff:/var/lib/docker/overlay2/ba2db849d29a96ccb7729ee8861cfb647a06ba046b1016e99e3c2ef9e7b92675/diff:/var/lib/docker/overlay2/bb7315b5e1884c47eaad6eddfa4e422b1b240ff1d1112deab5ff41e40a12970d/diff:/var/lib/docker/overlay2/25fd1b
7d003c93a7ef576bb052318e940d8e1c8a40db37179b03563a8a099490/diff:/var/lib/docker/overlay2/f22743b1afcc328f7d2c4740efeb1401d6c011f499d200dc16b11a352dfc07f7/diff:/var/lib/docker/overlay2/59ca3268b7b3862516f40c07f313c5cdbe659f949ce4bd6e4eedcfcdd80409b0/diff:/var/lib/docker/overlay2/ce66536b9c7b7d4d38eeb3b0f5842c927c181c4584e60fa25989b9de30ec5856/diff:/var/lib/docker/overlay2/f0bdec7810d2b53f48492f34d7889fdb7c86d692422978de474816cf3bf8e923/diff:/var/lib/docker/overlay2/b0f0a882b23b6635539c83a8a2837c52090aa306e12f64ed83edcd03596f0cde/diff:/var/lib/docker/overlay2/60180139b1a11a94ee6174e6512bad4a5e162470c686d6cc7c91d7c9fb1907a2/diff:/var/lib/docker/overlay2/f1a7c8c448077705a2b48dfccf2f6e599a8ef782efd7d171b349ad43a0cddcae/diff:/var/lib/docker/overlay2/d64e00c1407419f2261e34d0974453ad696f514f79d8ecdac1b8c3a2a117349c/diff:/var/lib/docker/overlay2/7af90e8306e3b3e8ed7d2d67099da7a7cbe0ed97a5b983c84548135857efc4d0/diff:/var/lib/docker/overlay2/85101cd67d726a8a42d8951a230b3acd76d4a62615c6ffe4aac1ebef17ab422d/diff:/var/lib/d
ocker/overlay2/09a5d9c2f9897ae114e76d4aed5af38d250d044b1d274f8dafa0cfd17789ea54/diff:/var/lib/docker/overlay2/a6b97f972b460567b473da6022dd8658db13cb06830fcb676e8c1ebc927e1d44/diff:/var/lib/docker/overlay2/b569cecedfd9b79ea9a49645099405472d529e224ffe4abed0921d9fbec171a7/diff:/var/lib/docker/overlay2/278ceb611708e5dc8e810eaeb6b08b283d298009965d14772f2b61f95355477a/diff:/var/lib/docker/overlay2/c6693259dde0f3190d9019d8aca0c27c980d5c31a40fff8274d2a57d8ef19f41/diff:/var/lib/docker/overlay2/4db1d3b0ba37b1bfa0f486b9c1b327686a1069e2e6cbfc2e279c1f597f7cd346/diff:/var/lib/docker/overlay2/50e4b8ce3599837ac51b108fd983aa9b876f47f3e7253cd0976be8df23c73a33/diff:/var/lib/docker/overlay2/ad2b5d101e83bca01ddb2257701208ceb46b4668f6d14e84ee171975bb6175db/diff:/var/lib/docker/overlay2/746a904e8c69bb992522394e576896d4e35d056023809a58fbac92d497d2968a/diff:/var/lib/docker/overlay2/03794e35d9fe845753f9bcb5648e7a7c1fcf7db9bcd82c7c3824c2142cb8a2b6/diff:/var/lib/docker/overlay2/75caadeb2dfb8cc524a4e0f9d7862ccf017f755a24e00453f5a85eb29a5
837de/diff:/var/lib/docker/overlay2/1a5ce4ae9316bb13d1739267bf6b30a17188ca9ac127663735bfac3d15e50abe/diff:/var/lib/docker/overlay2/fa61eaf7b77e6fa75456860b8b75e4779478979f9b4ad94cd62eadd22743421e/diff:/var/lib/docker/overlay2/9c1cd4fe6bd059e33f020198f5ff305dab3f4b102b14b5894c76cae7dc769b92/diff:/var/lib/docker/overlay2/46cf92e0e9cc79002bfb0f5c2e0ab28c771f260b3fea2cb434cd84d3a1ea7659/diff:/var/lib/docker/overlay2/b47be14a30a9c0339a3a49b552cad979169d6c9a909e7837759a155b4c74d128/diff:/var/lib/docker/overlay2/598716c3d9ddb5de953d6a462fc1af49f742bbe02fd1c01f7d548a9f93d3913d/diff:/var/lib/docker/overlay2/cd665df1518202898f79e694456b55b64d6095a28556be2dc545241df7633be7/diff:/var/lib/docker/overlay2/909b0f879f4ce91be83bada76dad0599c2839fa8a6534f976ee095ad44dce7c6/diff:/var/lib/docker/overlay2/fd78ebbf3c4baf9a9f0036cb0ed9a8908a05f2e78572d88fcb3f026cb000710b/diff:/var/lib/docker/overlay2/8a030c72fc8571d3240e0ab2d2aea23b84385f28f3ef2dd82b5be5b925dbca5b/diff:/var/lib/docker/overlay2/d87a4221a646268a958798509b8c3cb343463c
c8427ae96a424f653a0a4508c7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-163757",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-163757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-163757",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-163757",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-163757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5472cb82da8929c3d0db0f9316dc08cbb041f5d539a2b6c73a38333e68096a2a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53802"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53803"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53804"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53805"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53806"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5472cb82da89",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-163757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "68479d844c03",
	                        "old-k8s-version-163757"
	                    ],
	                    "NetworkID": "de11f6b0d4a3e9909764ae953f0f910d0d29438f96300416f12a7f896caa0f32",
	                    "EndpointID": "fbf513e735e71ad7a57cbd013d561aabe79fef5c74eebf9784b22027f797ce5a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757: exit status 6 (437.097901ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 16:42:07.786798   17669 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-163757" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-163757" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (249.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (60.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1101 16:38:04.600792    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.397413902s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1101 16:38:10.238512    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.125847541s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1101 16:38:14.841482    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.108838485s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.117578954s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1101 16:38:32.018275    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 16:38:35.323696    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.132642875s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.10964807s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1101 16:39:02.321950    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.109939842s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (60.89s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-163757 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-163757 create -f testdata/busybox.yaml: exit status 1 (34.690568ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-163757" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-163757 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-163757
helpers_test.go:235: (dbg) docker inspect old-k8s-version-163757:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e",
	        "Created": "2022-11-01T23:38:04.256272958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T23:38:04.553352Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/hostname",
	        "HostsPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/hosts",
	        "LogPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e-json.log",
	        "Name": "/old-k8s-version-163757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-163757:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-163757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a-init/diff:/var/lib/docker/overlay2/397c781354d1ae8b5c71df69b26a9a2493cf01723d23317a9b36f56b62ab53f3/diff:/var/lib/docker/overlay2/fe3fd9f7a011255c997093c6f7e1cb70c20cab26db5f52ff8b83c33d58519532/diff:/var/lib/docker/overlay2/f7328bad1e482720081fe1f9d1ab2ee05c71a9060abf63daf63a25e84818f237/diff:/var/lib/docker/overlay2/ca039979ed22affed678394443deee5ed35f2eb49243537b4205433189b87b2c/diff:/var/lib/docker/overlay2/a2ee3e754036b8777f801c847988e78d9b0ef881e82ea7467cef35a1261b9e20/diff:/var/lib/docker/overlay2/3de609efaeca546b0261017a1b19a9fa9ff6c9272609346b897e8075687c3698/diff:/var/lib/docker/overlay2/9101d388c406c87b2d10dc219dc3225ea59bfbedfc167adbfdf7578ed74a528b/diff:/var/lib/docker/overlay2/ba2db849d29a96ccb7729ee8861cfb647a06ba046b1016e99e3c2ef9e7b92675/diff:/var/lib/docker/overlay2/bb7315b5e1884c47eaad6eddfa4e422b1b240ff1d1112deab5ff41e40a12970d/diff:/var/lib/docker/overlay2/25fd1b
7d003c93a7ef576bb052318e940d8e1c8a40db37179b03563a8a099490/diff:/var/lib/docker/overlay2/f22743b1afcc328f7d2c4740efeb1401d6c011f499d200dc16b11a352dfc07f7/diff:/var/lib/docker/overlay2/59ca3268b7b3862516f40c07f313c5cdbe659f949ce4bd6e4eedcfcdd80409b0/diff:/var/lib/docker/overlay2/ce66536b9c7b7d4d38eeb3b0f5842c927c181c4584e60fa25989b9de30ec5856/diff:/var/lib/docker/overlay2/f0bdec7810d2b53f48492f34d7889fdb7c86d692422978de474816cf3bf8e923/diff:/var/lib/docker/overlay2/b0f0a882b23b6635539c83a8a2837c52090aa306e12f64ed83edcd03596f0cde/diff:/var/lib/docker/overlay2/60180139b1a11a94ee6174e6512bad4a5e162470c686d6cc7c91d7c9fb1907a2/diff:/var/lib/docker/overlay2/f1a7c8c448077705a2b48dfccf2f6e599a8ef782efd7d171b349ad43a0cddcae/diff:/var/lib/docker/overlay2/d64e00c1407419f2261e34d0974453ad696f514f79d8ecdac1b8c3a2a117349c/diff:/var/lib/docker/overlay2/7af90e8306e3b3e8ed7d2d67099da7a7cbe0ed97a5b983c84548135857efc4d0/diff:/var/lib/docker/overlay2/85101cd67d726a8a42d8951a230b3acd76d4a62615c6ffe4aac1ebef17ab422d/diff:/var/lib/d
ocker/overlay2/09a5d9c2f9897ae114e76d4aed5af38d250d044b1d274f8dafa0cfd17789ea54/diff:/var/lib/docker/overlay2/a6b97f972b460567b473da6022dd8658db13cb06830fcb676e8c1ebc927e1d44/diff:/var/lib/docker/overlay2/b569cecedfd9b79ea9a49645099405472d529e224ffe4abed0921d9fbec171a7/diff:/var/lib/docker/overlay2/278ceb611708e5dc8e810eaeb6b08b283d298009965d14772f2b61f95355477a/diff:/var/lib/docker/overlay2/c6693259dde0f3190d9019d8aca0c27c980d5c31a40fff8274d2a57d8ef19f41/diff:/var/lib/docker/overlay2/4db1d3b0ba37b1bfa0f486b9c1b327686a1069e2e6cbfc2e279c1f597f7cd346/diff:/var/lib/docker/overlay2/50e4b8ce3599837ac51b108fd983aa9b876f47f3e7253cd0976be8df23c73a33/diff:/var/lib/docker/overlay2/ad2b5d101e83bca01ddb2257701208ceb46b4668f6d14e84ee171975bb6175db/diff:/var/lib/docker/overlay2/746a904e8c69bb992522394e576896d4e35d056023809a58fbac92d497d2968a/diff:/var/lib/docker/overlay2/03794e35d9fe845753f9bcb5648e7a7c1fcf7db9bcd82c7c3824c2142cb8a2b6/diff:/var/lib/docker/overlay2/75caadeb2dfb8cc524a4e0f9d7862ccf017f755a24e00453f5a85eb29a5
837de/diff:/var/lib/docker/overlay2/1a5ce4ae9316bb13d1739267bf6b30a17188ca9ac127663735bfac3d15e50abe/diff:/var/lib/docker/overlay2/fa61eaf7b77e6fa75456860b8b75e4779478979f9b4ad94cd62eadd22743421e/diff:/var/lib/docker/overlay2/9c1cd4fe6bd059e33f020198f5ff305dab3f4b102b14b5894c76cae7dc769b92/diff:/var/lib/docker/overlay2/46cf92e0e9cc79002bfb0f5c2e0ab28c771f260b3fea2cb434cd84d3a1ea7659/diff:/var/lib/docker/overlay2/b47be14a30a9c0339a3a49b552cad979169d6c9a909e7837759a155b4c74d128/diff:/var/lib/docker/overlay2/598716c3d9ddb5de953d6a462fc1af49f742bbe02fd1c01f7d548a9f93d3913d/diff:/var/lib/docker/overlay2/cd665df1518202898f79e694456b55b64d6095a28556be2dc545241df7633be7/diff:/var/lib/docker/overlay2/909b0f879f4ce91be83bada76dad0599c2839fa8a6534f976ee095ad44dce7c6/diff:/var/lib/docker/overlay2/fd78ebbf3c4baf9a9f0036cb0ed9a8908a05f2e78572d88fcb3f026cb000710b/diff:/var/lib/docker/overlay2/8a030c72fc8571d3240e0ab2d2aea23b84385f28f3ef2dd82b5be5b925dbca5b/diff:/var/lib/docker/overlay2/d87a4221a646268a958798509b8c3cb343463c
c8427ae96a424f653a0a4508c7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-163757",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-163757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-163757",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-163757",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-163757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5472cb82da8929c3d0db0f9316dc08cbb041f5d539a2b6c73a38333e68096a2a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53802"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53803"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53804"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53805"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53806"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5472cb82da89",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-163757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "68479d844c03",
	                        "old-k8s-version-163757"
	                    ],
	                    "NetworkID": "de11f6b0d4a3e9909764ae953f0f910d0d29438f96300416f12a7f896caa0f32",
	                    "EndpointID": "fbf513e735e71ad7a57cbd013d561aabe79fef5c74eebf9784b22027f797ce5a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757: exit status 6 (403.373546ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 16:42:08.319205   17682 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-163757" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-163757" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
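The two status checks above fail for the same reason: the "old-k8s-version-163757" entry is missing from /Users/jenkins/minikube-integration/15232-2108/kubeconfig, so minikube cannot extract an API server IP even though the Docker container itself is running. A minimal diagnostic sketch, assuming the same profile name and a shell on the build host (these commands are illustrative and not part of the test run):
	kubectl config get-contexts                                          # confirm the profile's context is absent from the kubeconfig
	out/minikube-darwin-amd64 update-context -p old-k8s-version-163757   # rewrite the kubeconfig entry for this profile, as the warning suggests
	out/minikube-darwin-amd64 status -p old-k8s-version-163757           # re-check host/kubelet/apiserver state afterwards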
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-163757
helpers_test.go:235: (dbg) docker inspect old-k8s-version-163757:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e",
	        "Created": "2022-11-01T23:38:04.256272958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T23:38:04.553352Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/hostname",
	        "HostsPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/hosts",
	        "LogPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e-json.log",
	        "Name": "/old-k8s-version-163757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-163757:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-163757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a-init/diff:/var/lib/docker/overlay2/397c781354d1ae8b5c71df69b26a9a2493cf01723d23317a9b36f56b62ab53f3/diff:/var/lib/docker/overlay2/fe3fd9f7a011255c997093c6f7e1cb70c20cab26db5f52ff8b83c33d58519532/diff:/var/lib/docker/overlay2/f7328bad1e482720081fe1f9d1ab2ee05c71a9060abf63daf63a25e84818f237/diff:/var/lib/docker/overlay2/ca039979ed22affed678394443deee5ed35f2eb49243537b4205433189b87b2c/diff:/var/lib/docker/overlay2/a2ee3e754036b8777f801c847988e78d9b0ef881e82ea7467cef35a1261b9e20/diff:/var/lib/docker/overlay2/3de609efaeca546b0261017a1b19a9fa9ff6c9272609346b897e8075687c3698/diff:/var/lib/docker/overlay2/9101d388c406c87b2d10dc219dc3225ea59bfbedfc167adbfdf7578ed74a528b/diff:/var/lib/docker/overlay2/ba2db849d29a96ccb7729ee8861cfb647a06ba046b1016e99e3c2ef9e7b92675/diff:/var/lib/docker/overlay2/bb7315b5e1884c47eaad6eddfa4e422b1b240ff1d1112deab5ff41e40a12970d/diff:/var/lib/docker/overlay2/25fd1b
7d003c93a7ef576bb052318e940d8e1c8a40db37179b03563a8a099490/diff:/var/lib/docker/overlay2/f22743b1afcc328f7d2c4740efeb1401d6c011f499d200dc16b11a352dfc07f7/diff:/var/lib/docker/overlay2/59ca3268b7b3862516f40c07f313c5cdbe659f949ce4bd6e4eedcfcdd80409b0/diff:/var/lib/docker/overlay2/ce66536b9c7b7d4d38eeb3b0f5842c927c181c4584e60fa25989b9de30ec5856/diff:/var/lib/docker/overlay2/f0bdec7810d2b53f48492f34d7889fdb7c86d692422978de474816cf3bf8e923/diff:/var/lib/docker/overlay2/b0f0a882b23b6635539c83a8a2837c52090aa306e12f64ed83edcd03596f0cde/diff:/var/lib/docker/overlay2/60180139b1a11a94ee6174e6512bad4a5e162470c686d6cc7c91d7c9fb1907a2/diff:/var/lib/docker/overlay2/f1a7c8c448077705a2b48dfccf2f6e599a8ef782efd7d171b349ad43a0cddcae/diff:/var/lib/docker/overlay2/d64e00c1407419f2261e34d0974453ad696f514f79d8ecdac1b8c3a2a117349c/diff:/var/lib/docker/overlay2/7af90e8306e3b3e8ed7d2d67099da7a7cbe0ed97a5b983c84548135857efc4d0/diff:/var/lib/docker/overlay2/85101cd67d726a8a42d8951a230b3acd76d4a62615c6ffe4aac1ebef17ab422d/diff:/var/lib/d
ocker/overlay2/09a5d9c2f9897ae114e76d4aed5af38d250d044b1d274f8dafa0cfd17789ea54/diff:/var/lib/docker/overlay2/a6b97f972b460567b473da6022dd8658db13cb06830fcb676e8c1ebc927e1d44/diff:/var/lib/docker/overlay2/b569cecedfd9b79ea9a49645099405472d529e224ffe4abed0921d9fbec171a7/diff:/var/lib/docker/overlay2/278ceb611708e5dc8e810eaeb6b08b283d298009965d14772f2b61f95355477a/diff:/var/lib/docker/overlay2/c6693259dde0f3190d9019d8aca0c27c980d5c31a40fff8274d2a57d8ef19f41/diff:/var/lib/docker/overlay2/4db1d3b0ba37b1bfa0f486b9c1b327686a1069e2e6cbfc2e279c1f597f7cd346/diff:/var/lib/docker/overlay2/50e4b8ce3599837ac51b108fd983aa9b876f47f3e7253cd0976be8df23c73a33/diff:/var/lib/docker/overlay2/ad2b5d101e83bca01ddb2257701208ceb46b4668f6d14e84ee171975bb6175db/diff:/var/lib/docker/overlay2/746a904e8c69bb992522394e576896d4e35d056023809a58fbac92d497d2968a/diff:/var/lib/docker/overlay2/03794e35d9fe845753f9bcb5648e7a7c1fcf7db9bcd82c7c3824c2142cb8a2b6/diff:/var/lib/docker/overlay2/75caadeb2dfb8cc524a4e0f9d7862ccf017f755a24e00453f5a85eb29a5
837de/diff:/var/lib/docker/overlay2/1a5ce4ae9316bb13d1739267bf6b30a17188ca9ac127663735bfac3d15e50abe/diff:/var/lib/docker/overlay2/fa61eaf7b77e6fa75456860b8b75e4779478979f9b4ad94cd62eadd22743421e/diff:/var/lib/docker/overlay2/9c1cd4fe6bd059e33f020198f5ff305dab3f4b102b14b5894c76cae7dc769b92/diff:/var/lib/docker/overlay2/46cf92e0e9cc79002bfb0f5c2e0ab28c771f260b3fea2cb434cd84d3a1ea7659/diff:/var/lib/docker/overlay2/b47be14a30a9c0339a3a49b552cad979169d6c9a909e7837759a155b4c74d128/diff:/var/lib/docker/overlay2/598716c3d9ddb5de953d6a462fc1af49f742bbe02fd1c01f7d548a9f93d3913d/diff:/var/lib/docker/overlay2/cd665df1518202898f79e694456b55b64d6095a28556be2dc545241df7633be7/diff:/var/lib/docker/overlay2/909b0f879f4ce91be83bada76dad0599c2839fa8a6534f976ee095ad44dce7c6/diff:/var/lib/docker/overlay2/fd78ebbf3c4baf9a9f0036cb0ed9a8908a05f2e78572d88fcb3f026cb000710b/diff:/var/lib/docker/overlay2/8a030c72fc8571d3240e0ab2d2aea23b84385f28f3ef2dd82b5be5b925dbca5b/diff:/var/lib/docker/overlay2/d87a4221a646268a958798509b8c3cb343463c
c8427ae96a424f653a0a4508c7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-163757",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-163757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-163757",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-163757",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-163757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5472cb82da8929c3d0db0f9316dc08cbb041f5d539a2b6c73a38333e68096a2a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53802"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53803"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53804"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53805"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53806"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5472cb82da89",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-163757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "68479d844c03",
	                        "old-k8s-version-163757"
	                    ],
	                    "NetworkID": "de11f6b0d4a3e9909764ae953f0f910d0d29438f96300416f12a7f896caa0f32",
	                    "EndpointID": "fbf513e735e71ad7a57cbd013d561aabe79fef5c74eebf9784b22027f797ce5a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757: exit status 6 (405.233605ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 16:42:08.784567   17696 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-163757" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-163757" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.96s)
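The DeployApp step fails before any pod is created: kubectl has no "old-k8s-version-163757" context to talk to, so the create call exits with status 1 immediately. A rough manual reproduction, reusing only the profile name and manifest path already shown in the log above, would be:
	kubectl config get-contexts                                               # the profile's context should be listed here but is not
	kubectl --context old-k8s-version-163757 create -f testdata/busybox.yaml  # the same command the test runs at start_stop_delete_test.go:196
	kubectl --context old-k8s-version-163757 get pods                         # would show the busybox pod once the context exists and the create succeeds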

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-163757 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1101 16:42:10.226679    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:42:10.801501    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:42:16.027582    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:42:19.508483    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 16:42:25.468210    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:42:30.708954    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:42:35.711869    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:42:35.717220    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:42:35.728791    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:42:35.750847    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:42:35.791882    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:42:35.871992    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:42:36.032262    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:42:36.353115    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:42:36.993318    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:42:38.273605    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:42:40.835826    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:42:44.340624    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 16:42:45.956153    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:42:49.833515    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:42:49.839858    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:42:49.852043    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:42:49.872211    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:42:49.912726    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:42:49.992920    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:42:50.153562    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:42:50.475833    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:42:51.116035    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:42:52.396488    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:42:54.388699    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
E1101 16:42:54.956634    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:42:56.197766    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:43:00.076833    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:43:10.317465    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:43:11.668695    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:43:16.677783    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:43:22.074364    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
E1101 16:43:30.797435    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:43:32.044997    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 16:43:32.721650    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-163757 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.192479796s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
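The enable command reaches the node, but every kubectl apply in the addon callback is refused on 127.0.0.1:8443, meaning no API server was listening inside the container at that point. A hedged way to confirm that from the host, assuming the Docker container runtime inside the node (these commands are illustrative, not taken from the test):
	out/minikube-darwin-amd64 -p old-k8s-version-163757 logs --file=logs.txt   # collect full logs, as the error box above suggests
	out/minikube-darwin-amd64 -p old-k8s-version-163757 ssh -- docker ps       # look for a running kube-apiserver container inside the node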
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-163757 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-163757 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-163757 describe deploy/metrics-server -n kube-system: exit status 1 (34.958604ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-163757" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-163757 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
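The assertion at start_stop_delete_test.go:221 only passes if the metrics-server Deployment references the overridden image; because the apply never reached the API server, the describe output is empty and there is nothing to match. Had the addon been applied, an equivalent spot check (same context and namespace as the describe command above) might be:
	kubectl --context old-k8s-version-163757 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected output, given the --images/--registries flags: fake.domain/k8s.gcr.io/echoserver:1.4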
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-163757
helpers_test.go:235: (dbg) docker inspect old-k8s-version-163757:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e",
	        "Created": "2022-11-01T23:38:04.256272958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T23:38:04.553352Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/hostname",
	        "HostsPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/hosts",
	        "LogPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e-json.log",
	        "Name": "/old-k8s-version-163757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-163757:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-163757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a-init/diff:/var/lib/docker/overlay2/397c781354d1ae8b5c71df69b26a9a2493cf01723d23317a9b36f56b62ab53f3/diff:/var/lib/docker/overlay2/fe3fd9f7a011255c997093c6f7e1cb70c20cab26db5f52ff8b83c33d58519532/diff:/var/lib/docker/overlay2/f7328bad1e482720081fe1f9d1ab2ee05c71a9060abf63daf63a25e84818f237/diff:/var/lib/docker/overlay2/ca039979ed22affed678394443deee5ed35f2eb49243537b4205433189b87b2c/diff:/var/lib/docker/overlay2/a2ee3e754036b8777f801c847988e78d9b0ef881e82ea7467cef35a1261b9e20/diff:/var/lib/docker/overlay2/3de609efaeca546b0261017a1b19a9fa9ff6c9272609346b897e8075687c3698/diff:/var/lib/docker/overlay2/9101d388c406c87b2d10dc219dc3225ea59bfbedfc167adbfdf7578ed74a528b/diff:/var/lib/docker/overlay2/ba2db849d29a96ccb7729ee8861cfb647a06ba046b1016e99e3c2ef9e7b92675/diff:/var/lib/docker/overlay2/bb7315b5e1884c47eaad6eddfa4e422b1b240ff1d1112deab5ff41e40a12970d/diff:/var/lib/docker/overlay2/25fd1b
7d003c93a7ef576bb052318e940d8e1c8a40db37179b03563a8a099490/diff:/var/lib/docker/overlay2/f22743b1afcc328f7d2c4740efeb1401d6c011f499d200dc16b11a352dfc07f7/diff:/var/lib/docker/overlay2/59ca3268b7b3862516f40c07f313c5cdbe659f949ce4bd6e4eedcfcdd80409b0/diff:/var/lib/docker/overlay2/ce66536b9c7b7d4d38eeb3b0f5842c927c181c4584e60fa25989b9de30ec5856/diff:/var/lib/docker/overlay2/f0bdec7810d2b53f48492f34d7889fdb7c86d692422978de474816cf3bf8e923/diff:/var/lib/docker/overlay2/b0f0a882b23b6635539c83a8a2837c52090aa306e12f64ed83edcd03596f0cde/diff:/var/lib/docker/overlay2/60180139b1a11a94ee6174e6512bad4a5e162470c686d6cc7c91d7c9fb1907a2/diff:/var/lib/docker/overlay2/f1a7c8c448077705a2b48dfccf2f6e599a8ef782efd7d171b349ad43a0cddcae/diff:/var/lib/docker/overlay2/d64e00c1407419f2261e34d0974453ad696f514f79d8ecdac1b8c3a2a117349c/diff:/var/lib/docker/overlay2/7af90e8306e3b3e8ed7d2d67099da7a7cbe0ed97a5b983c84548135857efc4d0/diff:/var/lib/docker/overlay2/85101cd67d726a8a42d8951a230b3acd76d4a62615c6ffe4aac1ebef17ab422d/diff:/var/lib/d
ocker/overlay2/09a5d9c2f9897ae114e76d4aed5af38d250d044b1d274f8dafa0cfd17789ea54/diff:/var/lib/docker/overlay2/a6b97f972b460567b473da6022dd8658db13cb06830fcb676e8c1ebc927e1d44/diff:/var/lib/docker/overlay2/b569cecedfd9b79ea9a49645099405472d529e224ffe4abed0921d9fbec171a7/diff:/var/lib/docker/overlay2/278ceb611708e5dc8e810eaeb6b08b283d298009965d14772f2b61f95355477a/diff:/var/lib/docker/overlay2/c6693259dde0f3190d9019d8aca0c27c980d5c31a40fff8274d2a57d8ef19f41/diff:/var/lib/docker/overlay2/4db1d3b0ba37b1bfa0f486b9c1b327686a1069e2e6cbfc2e279c1f597f7cd346/diff:/var/lib/docker/overlay2/50e4b8ce3599837ac51b108fd983aa9b876f47f3e7253cd0976be8df23c73a33/diff:/var/lib/docker/overlay2/ad2b5d101e83bca01ddb2257701208ceb46b4668f6d14e84ee171975bb6175db/diff:/var/lib/docker/overlay2/746a904e8c69bb992522394e576896d4e35d056023809a58fbac92d497d2968a/diff:/var/lib/docker/overlay2/03794e35d9fe845753f9bcb5648e7a7c1fcf7db9bcd82c7c3824c2142cb8a2b6/diff:/var/lib/docker/overlay2/75caadeb2dfb8cc524a4e0f9d7862ccf017f755a24e00453f5a85eb29a5
837de/diff:/var/lib/docker/overlay2/1a5ce4ae9316bb13d1739267bf6b30a17188ca9ac127663735bfac3d15e50abe/diff:/var/lib/docker/overlay2/fa61eaf7b77e6fa75456860b8b75e4779478979f9b4ad94cd62eadd22743421e/diff:/var/lib/docker/overlay2/9c1cd4fe6bd059e33f020198f5ff305dab3f4b102b14b5894c76cae7dc769b92/diff:/var/lib/docker/overlay2/46cf92e0e9cc79002bfb0f5c2e0ab28c771f260b3fea2cb434cd84d3a1ea7659/diff:/var/lib/docker/overlay2/b47be14a30a9c0339a3a49b552cad979169d6c9a909e7837759a155b4c74d128/diff:/var/lib/docker/overlay2/598716c3d9ddb5de953d6a462fc1af49f742bbe02fd1c01f7d548a9f93d3913d/diff:/var/lib/docker/overlay2/cd665df1518202898f79e694456b55b64d6095a28556be2dc545241df7633be7/diff:/var/lib/docker/overlay2/909b0f879f4ce91be83bada76dad0599c2839fa8a6534f976ee095ad44dce7c6/diff:/var/lib/docker/overlay2/fd78ebbf3c4baf9a9f0036cb0ed9a8908a05f2e78572d88fcb3f026cb000710b/diff:/var/lib/docker/overlay2/8a030c72fc8571d3240e0ab2d2aea23b84385f28f3ef2dd82b5be5b925dbca5b/diff:/var/lib/docker/overlay2/d87a4221a646268a958798509b8c3cb343463c
c8427ae96a424f653a0a4508c7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-163757",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-163757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-163757",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-163757",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-163757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5472cb82da8929c3d0db0f9316dc08cbb041f5d539a2b6c73a38333e68096a2a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53802"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53803"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53804"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53805"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53806"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5472cb82da89",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-163757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "68479d844c03",
	                        "old-k8s-version-163757"
	                    ],
	                    "NetworkID": "de11f6b0d4a3e9909764ae953f0f910d0d29438f96300416f12a7f896caa0f32",
	                    "EndpointID": "fbf513e735e71ad7a57cbd013d561aabe79fef5c74eebf9784b22027f797ce5a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757: exit status 6 (394.921688ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 16:43:38.467248   17816 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-163757" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-163757" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.68s)
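The status failure above comes down to the profile's context being absent from the kubeconfig, as the stderr line reports ("old-k8s-version-163757" does not appear in the kubeconfig). A rough manual reproduction of the same check, assuming the same profile name and binary path used in this run, would be:

	# query only the host state for the profile (same invocation as helpers_test.go:239)
	out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757
	# the warning in the captured output suggests repairing the kubeconfig context for the profile
	out/minikube-darwin-amd64 update-context -p old-k8s-version-163757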

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (489.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-163757 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E1101 16:43:57.638292    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:44:11.758517    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:44:33.588923    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:44:41.620464    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:45:09.308564    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:45:19.557673    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-163757 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m4.703022008s)

                                                
                                                
-- stdout --
	* [old-k8s-version-163757] minikube v1.27.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-163757 in cluster old-k8s-version-163757
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-163757" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 16:43:40.495985   17846 out.go:296] Setting OutFile to fd 1 ...
	I1101 16:43:40.496172   17846 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 16:43:40.496177   17846 out.go:309] Setting ErrFile to fd 2...
	I1101 16:43:40.496182   17846 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 16:43:40.496309   17846 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	I1101 16:43:40.496811   17846 out.go:303] Setting JSON to false
	I1101 16:43:40.515643   17846 start.go:116] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4395,"bootTime":1667341825,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1101 16:43:40.515751   17846 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1101 16:43:40.537363   17846 out.go:177] * [old-k8s-version-163757] minikube v1.27.1 on Darwin 13.0
	I1101 16:43:40.579923   17846 notify.go:220] Checking for updates...
	I1101 16:43:40.601188   17846 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 16:43:40.622248   17846 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 16:43:40.644221   17846 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1101 16:43:40.666110   17846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 16:43:40.688269   17846 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	I1101 16:43:40.710625   17846 config.go:180] Loaded profile config "old-k8s-version-163757": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1101 16:43:40.733160   17846 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I1101 16:43:40.755015   17846 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 16:43:40.820507   17846 docker.go:137] docker version: linux-20.10.20
	I1101 16:43:40.820675   17846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 16:43:40.967082   17846 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-01 23:43:40.885457658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 16:43:41.010767   17846 out.go:177] * Using the docker driver based on existing profile
	I1101 16:43:41.032908   17846 start.go:282] selected driver: docker
	I1101 16:43:41.032938   17846 start.go:808] validating driver "docker" against &{Name:old-k8s-version-163757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-163757 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 16:43:41.033062   17846 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 16:43:41.036906   17846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 16:43:41.182788   17846 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-01 23:43:41.101561496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 16:43:41.182943   17846 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 16:43:41.182961   17846 cni.go:95] Creating CNI manager for ""
	I1101 16:43:41.182971   17846 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 16:43:41.182983   17846 start_flags.go:317] config:
	{Name:old-k8s-version-163757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-163757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 16:43:41.204415   17846 out.go:177] * Starting control plane node old-k8s-version-163757 in cluster old-k8s-version-163757
	I1101 16:43:41.225606   17846 cache.go:120] Beginning downloading kic base image for docker with docker
	I1101 16:43:41.267398   17846 out.go:177] * Pulling base image ...
	I1101 16:43:41.314285   17846 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1101 16:43:41.314325   17846 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 16:43:41.314393   17846 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1101 16:43:41.314415   17846 cache.go:57] Caching tarball of preloaded images
	I1101 16:43:41.314708   17846 preload.go:174] Found /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 16:43:41.314730   17846 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1101 16:43:41.315743   17846 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/config.json ...
	I1101 16:43:41.370894   17846 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1101 16:43:41.370912   17846 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1101 16:43:41.370923   17846 cache.go:208] Successfully downloaded all kic artifacts
	I1101 16:43:41.370966   17846 start.go:364] acquiring machines lock for old-k8s-version-163757: {Name:mk05bab20388c402631f99670934058a4ed5d425 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 16:43:41.371058   17846 start.go:368] acquired machines lock for "old-k8s-version-163757" in 72.358µs
	I1101 16:43:41.371083   17846 start.go:96] Skipping create...Using existing machine configuration
	I1101 16:43:41.371093   17846 fix.go:55] fixHost starting: 
	I1101 16:43:41.371385   17846 cli_runner.go:164] Run: docker container inspect old-k8s-version-163757 --format={{.State.Status}}
	I1101 16:43:41.428649   17846 fix.go:103] recreateIfNeeded on old-k8s-version-163757: state=Stopped err=<nil>
	W1101 16:43:41.428675   17846 fix.go:129] unexpected machine state, will restart: <nil>
	I1101 16:43:41.450521   17846 out.go:177] * Restarting existing docker container for "old-k8s-version-163757" ...
	I1101 16:43:41.510336   17846 cli_runner.go:164] Run: docker start old-k8s-version-163757
	I1101 16:43:41.847871   17846 cli_runner.go:164] Run: docker container inspect old-k8s-version-163757 --format={{.State.Status}}
	I1101 16:43:41.910644   17846 kic.go:415] container "old-k8s-version-163757" state is running.
	I1101 16:43:41.911299   17846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-163757
	I1101 16:43:41.976693   17846 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/config.json ...
	I1101 16:43:41.977130   17846 machine.go:88] provisioning docker machine ...
	I1101 16:43:41.977159   17846 ubuntu.go:169] provisioning hostname "old-k8s-version-163757"
	I1101 16:43:41.977232   17846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:43:42.049718   17846 main.go:134] libmachine: Using SSH client type: native
	I1101 16:43:42.050004   17846 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53982 <nil> <nil>}
	I1101 16:43:42.050023   17846 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-163757 && echo "old-k8s-version-163757" | sudo tee /etc/hostname
	I1101 16:43:42.177667   17846 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-163757
	
	I1101 16:43:42.177783   17846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:43:42.241302   17846 main.go:134] libmachine: Using SSH client type: native
	I1101 16:43:42.241474   17846 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53982 <nil> <nil>}
	I1101 16:43:42.241487   17846 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-163757' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-163757/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-163757' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 16:43:42.361326   17846 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 16:43:42.361345   17846 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15232-2108/.minikube CaCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15232-2108/.minikube}
	I1101 16:43:42.361365   17846 ubuntu.go:177] setting up certificates
	I1101 16:43:42.361373   17846 provision.go:83] configureAuth start
	I1101 16:43:42.361464   17846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-163757
	I1101 16:43:42.421814   17846 provision.go:138] copyHostCerts
	I1101 16:43:42.422003   17846 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem, removing ...
	I1101 16:43:42.422013   17846 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem
	I1101 16:43:42.422231   17846 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem (1675 bytes)
	I1101 16:43:42.422482   17846 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem, removing ...
	I1101 16:43:42.422489   17846 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem
	I1101 16:43:42.422558   17846 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem (1082 bytes)
	I1101 16:43:42.422719   17846 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem, removing ...
	I1101 16:43:42.422725   17846 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem
	I1101 16:43:42.422789   17846 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem (1123 bytes)
	I1101 16:43:42.422922   17846 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-163757 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-163757]
	I1101 16:43:42.483068   17846 provision.go:172] copyRemoteCerts
	I1101 16:43:42.483129   17846 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 16:43:42.483203   17846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:43:42.542381   17846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53982 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/old-k8s-version-163757/id_rsa Username:docker}
	I1101 16:43:42.630429   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 16:43:42.647558   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 16:43:42.664777   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 16:43:42.681922   17846 provision.go:86] duration metric: configureAuth took 320.533041ms
	I1101 16:43:42.681935   17846 ubuntu.go:193] setting minikube options for container-runtime
	I1101 16:43:42.682092   17846 config.go:180] Loaded profile config "old-k8s-version-163757": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1101 16:43:42.682170   17846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:43:42.741754   17846 main.go:134] libmachine: Using SSH client type: native
	I1101 16:43:42.741906   17846 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53982 <nil> <nil>}
	I1101 16:43:42.741915   17846 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 16:43:42.862122   17846 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1101 16:43:42.862139   17846 ubuntu.go:71] root file system type: overlay
	I1101 16:43:42.862301   17846 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 16:43:42.862404   17846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:43:42.918993   17846 main.go:134] libmachine: Using SSH client type: native
	I1101 16:43:42.919150   17846 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53982 <nil> <nil>}
	I1101 16:43:42.919202   17846 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 16:43:43.047176   17846 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 16:43:43.047302   17846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:43:43.105646   17846 main.go:134] libmachine: Using SSH client type: native
	I1101 16:43:43.105794   17846 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53982 <nil> <nil>}
	I1101 16:43:43.105807   17846 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 16:43:43.227490   17846 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 16:43:43.227512   17846 machine.go:91] provisioned docker machine in 1.250385981s
	I1101 16:43:43.227521   17846 start.go:300] post-start starting for "old-k8s-version-163757" (driver="docker")
	I1101 16:43:43.227526   17846 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 16:43:43.227609   17846 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 16:43:43.227676   17846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:43:43.286997   17846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53982 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/old-k8s-version-163757/id_rsa Username:docker}
	I1101 16:43:43.372386   17846 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 16:43:43.375914   17846 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 16:43:43.375932   17846 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 16:43:43.375939   17846 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 16:43:43.375944   17846 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1101 16:43:43.375952   17846 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/addons for local assets ...
	I1101 16:43:43.376047   17846 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/files for local assets ...
	I1101 16:43:43.376235   17846 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem -> 34132.pem in /etc/ssl/certs
	I1101 16:43:43.376434   17846 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 16:43:43.383737   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /etc/ssl/certs/34132.pem (1708 bytes)
	I1101 16:43:43.401301   17846 start.go:303] post-start completed in 173.771479ms
	I1101 16:43:43.401403   17846 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 16:43:43.401472   17846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:43:43.460666   17846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53982 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/old-k8s-version-163757/id_rsa Username:docker}
	I1101 16:43:43.544633   17846 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 16:43:43.548968   17846 fix.go:57] fixHost completed within 2.177897478s
	I1101 16:43:43.548982   17846 start.go:83] releasing machines lock for "old-k8s-version-163757", held for 2.17793886s
	I1101 16:43:43.549079   17846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-163757
	I1101 16:43:43.607666   17846 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1101 16:43:43.607672   17846 ssh_runner.go:195] Run: systemctl --version
	I1101 16:43:43.607745   17846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:43:43.607754   17846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:43:43.673997   17846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53982 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/old-k8s-version-163757/id_rsa Username:docker}
	I1101 16:43:43.674002   17846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53982 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/old-k8s-version-163757/id_rsa Username:docker}
	I1101 16:43:44.022815   17846 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 16:43:44.032517   17846 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1101 16:43:44.032603   17846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 16:43:44.045237   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 16:43:44.058572   17846 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 16:43:44.121751   17846 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 16:43:44.189921   17846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 16:43:44.261758   17846 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 16:43:44.459068   17846 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 16:43:44.487935   17846 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 16:43:44.561187   17846 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	I1101 16:43:44.561388   17846 cli_runner.go:164] Run: docker exec -t old-k8s-version-163757 dig +short host.docker.internal
	I1101 16:43:44.678041   17846 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1101 16:43:44.678174   17846 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1101 16:43:44.683157   17846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 16:43:44.693115   17846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:43:44.751767   17846 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1101 16:43:44.751845   17846 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 16:43:44.775271   17846 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1101 16:43:44.775304   17846 docker.go:543] Images already preloaded, skipping extraction
	I1101 16:43:44.775402   17846 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 16:43:44.800283   17846 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1101 16:43:44.800304   17846 cache_images.go:84] Images are preloaded, skipping loading
	I1101 16:43:44.800405   17846 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1101 16:43:44.870933   17846 cni.go:95] Creating CNI manager for ""
	I1101 16:43:44.870953   17846 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 16:43:44.870966   17846 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 16:43:44.871001   17846 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-163757 NodeName:old-k8s-version-163757 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1101 16:43:44.871126   17846 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-163757"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-163757
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 16:43:44.871206   17846 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-163757 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-163757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 16:43:44.871291   17846 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1101 16:43:44.879203   17846 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 16:43:44.879278   17846 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 16:43:44.886629   17846 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I1101 16:43:44.900423   17846 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 16:43:44.913511   17846 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1101 16:43:44.926123   17846 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 16:43:44.929830   17846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 16:43:44.939611   17846 certs.go:54] Setting up /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757 for IP: 192.168.76.2
	I1101 16:43:44.939741   17846 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key
	I1101 16:43:44.939797   17846 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key
	I1101 16:43:44.939930   17846 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/client.key
	I1101 16:43:44.940008   17846 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.key.31bdca25
	I1101 16:43:44.940071   17846 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/proxy-client.key
	I1101 16:43:44.940309   17846 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem (1338 bytes)
	W1101 16:43:44.940358   17846 certs.go:384] ignoring /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413_empty.pem, impossibly tiny 0 bytes
	I1101 16:43:44.940371   17846 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 16:43:44.940407   17846 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem (1082 bytes)
	I1101 16:43:44.940443   17846 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem (1123 bytes)
	I1101 16:43:44.940480   17846 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem (1675 bytes)
	I1101 16:43:44.940557   17846 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem (1708 bytes)
	I1101 16:43:44.941158   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 16:43:44.959154   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 16:43:44.976960   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 16:43:44.994136   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/old-k8s-version-163757/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 16:43:45.012087   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 16:43:45.029291   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 16:43:45.047563   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 16:43:45.065992   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 16:43:45.083245   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /usr/share/ca-certificates/34132.pem (1708 bytes)
	I1101 16:43:45.100338   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 16:43:45.117692   17846 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem --> /usr/share/ca-certificates/3413.pem (1338 bytes)
	I1101 16:43:45.134901   17846 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 16:43:45.148120   17846 ssh_runner.go:195] Run: openssl version
	I1101 16:43:45.153540   17846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3413.pem && ln -fs /usr/share/ca-certificates/3413.pem /etc/ssl/certs/3413.pem"
	I1101 16:43:45.161829   17846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3413.pem
	I1101 16:43:45.166145   17846 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  1 22:49 /usr/share/ca-certificates/3413.pem
	I1101 16:43:45.166220   17846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3413.pem
	I1101 16:43:45.171811   17846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3413.pem /etc/ssl/certs/51391683.0"
	I1101 16:43:45.179049   17846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34132.pem && ln -fs /usr/share/ca-certificates/34132.pem /etc/ssl/certs/34132.pem"
	I1101 16:43:45.187179   17846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34132.pem
	I1101 16:43:45.191084   17846 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  1 22:49 /usr/share/ca-certificates/34132.pem
	I1101 16:43:45.191139   17846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34132.pem
	I1101 16:43:45.197415   17846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34132.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 16:43:45.205085   17846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 16:43:45.212873   17846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:43:45.217099   17846 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  1 22:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:43:45.217150   17846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:43:45.222418   17846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 16:43:45.230189   17846 kubeadm.go:396] StartCluster: {Name:old-k8s-version-163757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-163757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 16:43:45.230311   17846 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 16:43:45.255201   17846 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 16:43:45.264595   17846 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1101 16:43:45.264610   17846 kubeadm.go:627] restartCluster start
	I1101 16:43:45.264676   17846 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 16:43:45.272222   17846 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:45.272322   17846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-163757
	I1101 16:43:45.332266   17846 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-163757" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 16:43:45.332431   17846 kubeconfig.go:146] "old-k8s-version-163757" context is missing from /Users/jenkins/minikube-integration/15232-2108/kubeconfig - will repair!
	I1101 16:43:45.332784   17846 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/kubeconfig: {Name:mka869f80d5e962d9ffa24675c3f5e3e0593fcfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:43:45.334182   17846 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 16:43:45.341847   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:45.341900   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:45.350194   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:45.552459   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:45.552633   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:45.563383   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:45.751527   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:45.751754   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:45.762389   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:45.950305   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:45.950397   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:45.960350   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:46.152336   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:46.152525   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:46.163485   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:46.350498   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:46.350615   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:46.361191   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:46.550404   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:46.550625   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:46.560842   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:46.752133   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:46.752246   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:46.762528   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:46.951041   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:46.951232   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:46.962126   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:47.150326   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:47.150423   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:47.162620   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:47.351652   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:47.351777   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:47.361777   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:47.550254   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:47.550366   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:47.559713   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:47.751568   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:47.751680   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:47.761849   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:47.950917   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:47.951087   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:47.961281   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:48.150383   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:48.150589   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:48.163222   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:48.352426   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:48.352549   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:48.363221   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:48.363231   17846 api_server.go:165] Checking apiserver status ...
	I1101 16:43:48.363290   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:43:48.371462   17846 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:43:48.371475   17846 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I1101 16:43:48.371483   17846 kubeadm.go:1114] stopping kube-system containers ...
	I1101 16:43:48.371577   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 16:43:48.394632   17846 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 16:43:48.405223   17846 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 16:43:48.412705   17846 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Nov  1 23:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Nov  1 23:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Nov  1 23:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Nov  1 23:40 /etc/kubernetes/scheduler.conf
	
	I1101 16:43:48.412770   17846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 16:43:48.420421   17846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 16:43:48.428102   17846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 16:43:48.437041   17846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 16:43:48.444919   17846 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 16:43:48.452381   17846 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 16:43:48.452399   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:43:48.507353   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:43:49.097324   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:43:49.320438   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:43:49.384320   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:43:49.438771   17846 api_server.go:51] waiting for apiserver process to appear ...
	I1101 16:43:49.438844   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:49.948435   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:50.447937   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:50.948091   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:51.447987   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:51.949809   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:52.447979   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:52.948386   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:53.448055   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:53.947955   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:54.447943   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:54.947900   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:55.448109   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:55.948001   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:56.449921   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:56.949094   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:57.447950   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:57.948396   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:58.448114   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:58.947863   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:59.448535   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:43:59.948007   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:00.449942   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:00.948616   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:01.447943   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:01.948166   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:02.448085   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:02.948057   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:03.448137   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:03.948562   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:04.447901   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:04.948669   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:05.447849   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:05.947898   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:06.447999   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:06.948264   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:07.448053   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:07.948975   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:08.447790   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:08.947749   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:09.448218   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:09.948080   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:10.447962   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:10.948816   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:11.447719   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:11.947828   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:12.448389   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:12.947830   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:13.448541   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:13.948909   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:14.449784   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:14.949196   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:15.447737   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:15.948733   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:16.448214   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:16.947678   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:17.448896   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:17.948344   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:18.447671   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:18.949313   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:19.448552   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:19.948777   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:20.447761   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:20.947734   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:21.447834   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:21.948277   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:22.447709   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:22.948005   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:23.447786   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:23.948324   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:24.447591   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:24.947885   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:25.449001   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:25.947813   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:26.447951   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:26.948188   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:27.447743   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:27.947661   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:28.448240   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:28.947782   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:29.447643   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:29.947768   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:30.447819   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:30.949621   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:31.447962   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:31.947600   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:32.448253   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:32.948350   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:33.449171   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:33.948431   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:34.447712   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:34.949161   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:35.449671   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:35.947527   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:36.449650   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:36.947467   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:37.447981   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:37.948435   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:38.449219   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:38.948789   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:39.449007   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:39.949622   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:40.447411   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:40.949530   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:41.448931   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:41.947383   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:42.447972   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:42.948709   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:43.447384   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:43.949537   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:44.449530   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:44.947700   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:45.448248   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:45.948133   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:46.448076   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:46.948764   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:47.447378   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:47.948030   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:48.447515   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:48.947347   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:49.447412   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:44:49.471590   17846 logs.go:274] 0 containers: []
	W1101 16:44:49.471602   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:44:49.471691   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:44:49.496397   17846 logs.go:274] 0 containers: []
	W1101 16:44:49.496410   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:44:49.496498   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:44:49.520786   17846 logs.go:274] 0 containers: []
	W1101 16:44:49.520799   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:44:49.520887   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:44:49.543028   17846 logs.go:274] 0 containers: []
	W1101 16:44:49.543040   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:44:49.543130   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:44:49.566350   17846 logs.go:274] 0 containers: []
	W1101 16:44:49.566362   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:44:49.566446   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:44:49.588407   17846 logs.go:274] 0 containers: []
	W1101 16:44:49.588419   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:44:49.588500   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:44:49.610719   17846 logs.go:274] 0 containers: []
	W1101 16:44:49.610731   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:44:49.610820   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:44:49.636789   17846 logs.go:274] 0 containers: []
	W1101 16:44:49.636804   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:44:49.636813   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:44:49.636823   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:44:49.654571   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:44:49.654594   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:44:49.712137   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:44:49.712156   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:44:49.712163   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:44:49.726334   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:44:49.726347   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:44:51.779096   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052758728s)
	I1101 16:44:51.779267   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:44:51.779276   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:44:54.318507   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:54.448810   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:44:54.473556   17846 logs.go:274] 0 containers: []
	W1101 16:44:54.473568   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:44:54.473648   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:44:54.497798   17846 logs.go:274] 0 containers: []
	W1101 16:44:54.497810   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:44:54.497889   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:44:54.520401   17846 logs.go:274] 0 containers: []
	W1101 16:44:54.520413   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:44:54.520498   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:44:54.542280   17846 logs.go:274] 0 containers: []
	W1101 16:44:54.542292   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:44:54.542375   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:44:54.565880   17846 logs.go:274] 0 containers: []
	W1101 16:44:54.565893   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:44:54.565994   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:44:54.588748   17846 logs.go:274] 0 containers: []
	W1101 16:44:54.588759   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:44:54.588839   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:44:54.610937   17846 logs.go:274] 0 containers: []
	W1101 16:44:54.610953   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:44:54.611043   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:44:54.634853   17846 logs.go:274] 0 containers: []
	W1101 16:44:54.634866   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:44:54.634875   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:44:54.634890   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:44:54.647102   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:44:54.647115   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:44:54.702471   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:44:54.702484   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:44:54.702491   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:44:54.717816   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:44:54.717832   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:44:56.769600   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051774921s)
	I1101 16:44:56.769718   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:44:56.769725   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:44:59.311863   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:44:59.447329   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:44:59.475794   17846 logs.go:274] 0 containers: []
	W1101 16:44:59.475807   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:44:59.475895   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:44:59.500915   17846 logs.go:274] 0 containers: []
	W1101 16:44:59.500928   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:44:59.501011   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:44:59.527468   17846 logs.go:274] 0 containers: []
	W1101 16:44:59.527481   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:44:59.527578   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:44:59.553041   17846 logs.go:274] 0 containers: []
	W1101 16:44:59.553053   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:44:59.553133   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:44:59.577594   17846 logs.go:274] 0 containers: []
	W1101 16:44:59.577630   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:44:59.577711   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:44:59.603520   17846 logs.go:274] 0 containers: []
	W1101 16:44:59.603536   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:44:59.603632   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:44:59.629833   17846 logs.go:274] 0 containers: []
	W1101 16:44:59.629846   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:44:59.629935   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:44:59.654302   17846 logs.go:274] 0 containers: []
	W1101 16:44:59.654313   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:44:59.654320   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:44:59.654327   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:44:59.700603   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:44:59.700625   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:44:59.714848   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:44:59.714863   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:44:59.774144   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:44:59.774163   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:44:59.774181   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:44:59.789552   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:44:59.789570   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:45:01.836405   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046843218s)
	I1101 16:45:04.338882   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:45:04.449185   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:45:04.487565   17846 logs.go:274] 0 containers: []
	W1101 16:45:04.487580   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:45:04.487682   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:45:04.522653   17846 logs.go:274] 0 containers: []
	W1101 16:45:04.522675   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:45:04.522777   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:45:04.552245   17846 logs.go:274] 0 containers: []
	W1101 16:45:04.552279   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:45:04.552448   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:45:04.576128   17846 logs.go:274] 0 containers: []
	W1101 16:45:04.576142   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:45:04.576229   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:45:04.600760   17846 logs.go:274] 0 containers: []
	W1101 16:45:04.600773   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:45:04.600869   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:45:04.630654   17846 logs.go:274] 0 containers: []
	W1101 16:45:04.630668   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:45:04.630764   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:45:04.656014   17846 logs.go:274] 0 containers: []
	W1101 16:45:04.656027   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:45:04.656131   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:45:04.679645   17846 logs.go:274] 0 containers: []
	W1101 16:45:04.679658   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:45:04.679666   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:45:04.679673   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:45:04.725540   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:45:04.725561   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:45:04.738854   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:45:04.738871   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:45:04.800170   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:45:04.800181   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:45:04.800193   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:45:04.815423   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:45:04.815437   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:45:06.868496   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05306157s)
	I1101 16:45:09.368820   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:45:09.448469   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:45:09.474442   17846 logs.go:274] 0 containers: []
	W1101 16:45:09.474455   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:45:09.474546   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:45:09.497101   17846 logs.go:274] 0 containers: []
	W1101 16:45:09.497113   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:45:09.497196   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:45:09.521627   17846 logs.go:274] 0 containers: []
	W1101 16:45:09.521642   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:45:09.521733   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:45:09.547665   17846 logs.go:274] 0 containers: []
	W1101 16:45:09.547679   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:45:09.547765   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:45:09.572082   17846 logs.go:274] 0 containers: []
	W1101 16:45:09.572095   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:45:09.572180   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:45:09.595339   17846 logs.go:274] 0 containers: []
	W1101 16:45:09.595350   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:45:09.595451   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:45:09.620618   17846 logs.go:274] 0 containers: []
	W1101 16:45:09.620630   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:45:09.620721   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:45:09.644592   17846 logs.go:274] 0 containers: []
	W1101 16:45:09.644604   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:45:09.644611   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:45:09.644618   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:45:09.685521   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:45:09.685540   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:45:09.705896   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:45:09.705916   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:45:09.767628   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:45:09.767649   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:45:09.767656   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:45:09.783391   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:45:09.783405   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:45:11.827834   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044436961s)
	I1101 16:45:14.330338   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:45:14.447163   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:45:14.473866   17846 logs.go:274] 0 containers: []
	W1101 16:45:14.473878   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:45:14.473965   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:45:14.495139   17846 logs.go:274] 0 containers: []
	W1101 16:45:14.495149   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:45:14.495231   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:45:14.518530   17846 logs.go:274] 0 containers: []
	W1101 16:45:14.518542   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:45:14.518621   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:45:14.541616   17846 logs.go:274] 0 containers: []
	W1101 16:45:14.541629   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:45:14.541712   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:45:14.563623   17846 logs.go:274] 0 containers: []
	W1101 16:45:14.563635   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:45:14.563735   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:45:14.586906   17846 logs.go:274] 0 containers: []
	W1101 16:45:14.586921   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:45:14.587019   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:45:14.609921   17846 logs.go:274] 0 containers: []
	W1101 16:45:14.609933   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:45:14.610015   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:45:14.631851   17846 logs.go:274] 0 containers: []
	W1101 16:45:14.631863   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:45:14.631873   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:45:14.631879   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:45:16.679799   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047924704s)
	I1101 16:45:16.679907   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:45:16.679921   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:45:16.718779   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:45:16.718794   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:45:16.730580   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:45:16.730593   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:45:16.785780   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:45:16.785791   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:45:16.785798   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:45:19.300343   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:45:19.447879   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:45:19.476424   17846 logs.go:274] 0 containers: []
	W1101 16:45:19.476437   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:45:19.476529   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:45:19.501709   17846 logs.go:274] 0 containers: []
	W1101 16:45:19.501720   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:45:19.501824   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:45:19.524821   17846 logs.go:274] 0 containers: []
	W1101 16:45:19.524834   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:45:19.524917   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:45:19.549013   17846 logs.go:274] 0 containers: []
	W1101 16:45:19.549025   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:45:19.549106   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:45:19.574046   17846 logs.go:274] 0 containers: []
	W1101 16:45:19.574058   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:45:19.574135   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:45:19.596794   17846 logs.go:274] 0 containers: []
	W1101 16:45:19.596806   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:45:19.596887   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:45:19.618683   17846 logs.go:274] 0 containers: []
	W1101 16:45:19.618695   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:45:19.618781   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:45:19.641303   17846 logs.go:274] 0 containers: []
	W1101 16:45:19.641316   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:45:19.641323   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:45:19.641331   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:45:21.691040   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049717996s)
	I1101 16:45:21.691165   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:45:21.691174   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:45:21.730032   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:45:21.730074   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:45:21.748638   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:45:21.748651   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:45:21.824544   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:45:21.824608   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:45:21.824618   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:45:24.346612   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:45:24.447244   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:45:24.483561   17846 logs.go:274] 0 containers: []
	W1101 16:45:24.483588   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:45:24.483685   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:45:24.510728   17846 logs.go:274] 0 containers: []
	W1101 16:45:24.510741   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:45:24.510823   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:45:24.537566   17846 logs.go:274] 0 containers: []
	W1101 16:45:24.537580   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:45:24.537664   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:45:24.563381   17846 logs.go:274] 0 containers: []
	W1101 16:45:24.563394   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:45:24.563491   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:45:24.592217   17846 logs.go:274] 0 containers: []
	W1101 16:45:24.592235   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:45:24.592341   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:45:24.621428   17846 logs.go:274] 0 containers: []
	W1101 16:45:24.621442   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:45:24.621537   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:45:24.651852   17846 logs.go:274] 0 containers: []
	W1101 16:45:24.651867   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:45:24.651973   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:45:24.684223   17846 logs.go:274] 0 containers: []
	W1101 16:45:24.684239   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:45:24.684249   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:45:24.684257   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:45:24.702674   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:45:24.702701   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:45:26.757386   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054684935s)
	I1101 16:45:26.757523   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:45:26.757533   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:45:26.807361   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:45:26.807378   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:45:26.819744   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:45:26.819796   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:45:26.878588   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:45:29.378785   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:45:29.447768   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:45:29.475408   17846 logs.go:274] 0 containers: []
	W1101 16:45:29.475422   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:45:29.475514   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:45:29.510201   17846 logs.go:274] 0 containers: []
	W1101 16:45:29.510233   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:45:29.510344   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:45:29.545874   17846 logs.go:274] 0 containers: []
	W1101 16:45:29.545887   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:45:29.545984   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:45:29.581123   17846 logs.go:274] 0 containers: []
	W1101 16:45:29.581137   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:45:29.581231   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:45:29.611690   17846 logs.go:274] 0 containers: []
	W1101 16:45:29.611704   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:45:29.611787   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:45:29.637264   17846 logs.go:274] 0 containers: []
	W1101 16:45:29.637278   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:45:29.637401   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:45:29.664039   17846 logs.go:274] 0 containers: []
	W1101 16:45:29.664057   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:45:29.664167   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:45:29.689812   17846 logs.go:274] 0 containers: []
	W1101 16:45:29.689827   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:45:29.689837   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:45:29.689845   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:45:29.736948   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:45:29.736974   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:45:29.756434   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:45:29.756457   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:45:29.830284   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:45:29.830297   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:45:29.830306   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:45:29.854320   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:45:29.854336   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:45:31.927084   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.072754566s)
	I1101 16:45:34.427389   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:45:34.447934   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:45:34.475778   17846 logs.go:274] 0 containers: []
	W1101 16:45:34.475795   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:45:34.475885   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:45:34.505872   17846 logs.go:274] 0 containers: []
	W1101 16:45:34.505885   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:45:34.505976   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:45:34.536359   17846 logs.go:274] 0 containers: []
	W1101 16:45:34.536371   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:45:34.536452   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:45:34.564002   17846 logs.go:274] 0 containers: []
	W1101 16:45:34.564017   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:45:34.564150   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:45:34.593476   17846 logs.go:274] 0 containers: []
	W1101 16:45:34.593489   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:45:34.593587   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:45:34.620712   17846 logs.go:274] 0 containers: []
	W1101 16:45:34.620725   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:45:34.620809   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:45:34.648367   17846 logs.go:274] 0 containers: []
	W1101 16:45:34.648379   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:45:34.648465   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:45:34.675672   17846 logs.go:274] 0 containers: []
	W1101 16:45:34.675685   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:45:34.675692   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:45:34.675700   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:45:34.747676   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:45:34.747688   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:45:34.747698   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:45:34.764832   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:45:34.764848   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:45:36.824713   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059872499s)
	I1101 16:45:36.824823   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:45:36.824830   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:45:36.874057   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:45:36.874075   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:45:39.392630   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:45:39.447149   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:45:39.473939   17846 logs.go:274] 0 containers: []
	W1101 16:45:39.473952   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:45:39.474050   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:45:39.501946   17846 logs.go:274] 0 containers: []
	W1101 16:45:39.501982   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:45:39.502066   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:45:39.527279   17846 logs.go:274] 0 containers: []
	W1101 16:45:39.527295   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:45:39.527402   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:45:39.550515   17846 logs.go:274] 0 containers: []
	W1101 16:45:39.550529   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:45:39.550624   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:45:39.574791   17846 logs.go:274] 0 containers: []
	W1101 16:45:39.574804   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:45:39.574892   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:45:39.600538   17846 logs.go:274] 0 containers: []
	W1101 16:45:39.600553   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:45:39.600641   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:45:39.624724   17846 logs.go:274] 0 containers: []
	W1101 16:45:39.624737   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:45:39.624827   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:45:39.649322   17846 logs.go:274] 0 containers: []
	W1101 16:45:39.649338   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:45:39.649347   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:45:39.649355   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:45:39.720509   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:45:39.720524   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:45:39.720531   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:45:39.738476   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:45:39.738491   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:45:41.789807   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051324574s)
	I1101 16:45:41.789918   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:45:41.789925   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:45:41.835827   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:45:41.835847   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:45:44.350739   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:45:44.446849   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:45:44.472301   17846 logs.go:274] 0 containers: []
	W1101 16:45:44.472315   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:45:44.472401   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:45:44.494198   17846 logs.go:274] 0 containers: []
	W1101 16:45:44.494210   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:45:44.494291   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:45:44.518093   17846 logs.go:274] 0 containers: []
	W1101 16:45:44.518106   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:45:44.518188   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:45:44.541862   17846 logs.go:274] 0 containers: []
	W1101 16:45:44.541874   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:45:44.541954   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:45:44.563502   17846 logs.go:274] 0 containers: []
	W1101 16:45:44.563514   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:45:44.563595   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:45:44.585952   17846 logs.go:274] 0 containers: []
	W1101 16:45:44.585962   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:45:44.586043   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:45:44.608562   17846 logs.go:274] 0 containers: []
	W1101 16:45:44.608576   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:45:44.608658   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:45:44.631059   17846 logs.go:274] 0 containers: []
	W1101 16:45:44.631070   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:45:44.631077   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:45:44.631084   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:45:44.674534   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:45:44.674553   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:45:44.690167   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:45:44.690182   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:45:44.749902   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:45:44.749912   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:45:44.749920   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:45:44.764323   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:45:44.764336   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:45:46.811037   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046706063s)
	I1101 16:45:49.313415   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:45:49.446895   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:45:49.471622   17846 logs.go:274] 0 containers: []
	W1101 16:45:49.471634   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:45:49.471717   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:45:49.493255   17846 logs.go:274] 0 containers: []
	W1101 16:45:49.493268   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:45:49.493349   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:45:49.519062   17846 logs.go:274] 0 containers: []
	W1101 16:45:49.519074   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:45:49.519157   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:45:49.542632   17846 logs.go:274] 0 containers: []
	W1101 16:45:49.542644   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:45:49.542723   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:45:49.565711   17846 logs.go:274] 0 containers: []
	W1101 16:45:49.565722   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:45:49.565820   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:45:49.588817   17846 logs.go:274] 0 containers: []
	W1101 16:45:49.588829   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:45:49.588912   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:45:49.611323   17846 logs.go:274] 0 containers: []
	W1101 16:45:49.611340   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:45:49.611422   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:45:49.634612   17846 logs.go:274] 0 containers: []
	W1101 16:45:49.634625   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:45:49.634785   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:45:49.634879   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:45:49.677274   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:45:49.677290   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:45:49.689661   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:45:49.689677   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:45:49.744209   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:45:49.744220   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:45:49.744227   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:45:49.757992   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:45:49.758004   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:45:51.806480   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048483752s)
	I1101 16:45:54.306719   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:45:54.447019   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:45:54.473939   17846 logs.go:274] 0 containers: []
	W1101 16:45:54.473953   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:45:54.474035   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:45:54.498429   17846 logs.go:274] 0 containers: []
	W1101 16:45:54.498441   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:45:54.498527   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:45:54.525017   17846 logs.go:274] 0 containers: []
	W1101 16:45:54.525031   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:45:54.525133   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:45:54.548474   17846 logs.go:274] 0 containers: []
	W1101 16:45:54.548485   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:45:54.548568   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:45:54.570366   17846 logs.go:274] 0 containers: []
	W1101 16:45:54.570405   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:45:54.570487   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:45:54.597820   17846 logs.go:274] 0 containers: []
	W1101 16:45:54.597833   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:45:54.597922   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:45:54.623599   17846 logs.go:274] 0 containers: []
	W1101 16:45:54.623610   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:45:54.623743   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:45:54.648632   17846 logs.go:274] 0 containers: []
	W1101 16:45:54.648645   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:45:54.648652   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:45:54.648660   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:45:54.691071   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:45:54.691085   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:45:54.704081   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:45:54.704099   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:45:54.764862   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:45:54.764878   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:45:54.764888   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:45:54.787835   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:45:54.787871   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:45:56.847956   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060082508s)
	I1101 16:45:59.349130   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:45:59.446867   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:45:59.471662   17846 logs.go:274] 0 containers: []
	W1101 16:45:59.471675   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:45:59.471782   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:45:59.497889   17846 logs.go:274] 0 containers: []
	W1101 16:45:59.497902   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:45:59.497978   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:45:59.521461   17846 logs.go:274] 0 containers: []
	W1101 16:45:59.521474   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:45:59.521571   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:45:59.545577   17846 logs.go:274] 0 containers: []
	W1101 16:45:59.545591   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:45:59.545671   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:45:59.571161   17846 logs.go:274] 0 containers: []
	W1101 16:45:59.571173   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:45:59.571270   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:45:59.595509   17846 logs.go:274] 0 containers: []
	W1101 16:45:59.595522   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:45:59.595605   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:45:59.621053   17846 logs.go:274] 0 containers: []
	W1101 16:45:59.621066   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:45:59.621147   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:45:59.645787   17846 logs.go:274] 0 containers: []
	W1101 16:45:59.645839   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:45:59.645847   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:45:59.645855   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:45:59.693118   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:45:59.693139   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:45:59.708549   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:45:59.708565   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:45:59.771724   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:45:59.771737   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:45:59.771744   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:45:59.787581   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:45:59.787595   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:46:01.837925   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05033602s)
	I1101 16:46:04.338492   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:46:04.446718   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:46:04.472003   17846 logs.go:274] 0 containers: []
	W1101 16:46:04.472015   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:46:04.472094   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:46:04.500019   17846 logs.go:274] 0 containers: []
	W1101 16:46:04.500032   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:46:04.500134   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:46:04.529368   17846 logs.go:274] 0 containers: []
	W1101 16:46:04.529381   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:46:04.529471   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:46:04.559639   17846 logs.go:274] 0 containers: []
	W1101 16:46:04.559659   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:46:04.559764   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:46:04.587641   17846 logs.go:274] 0 containers: []
	W1101 16:46:04.587655   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:46:04.587744   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:46:04.643285   17846 logs.go:274] 0 containers: []
	W1101 16:46:04.643303   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:46:04.643404   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:46:04.686393   17846 logs.go:274] 0 containers: []
	W1101 16:46:04.686425   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:46:04.686549   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:46:04.723975   17846 logs.go:274] 0 containers: []
	W1101 16:46:04.723990   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:46:04.724002   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:46:04.724011   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:46:04.781602   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:46:04.781621   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:46:04.794019   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:46:04.794033   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:46:04.856864   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:46:04.856880   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:46:04.856891   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:46:04.873530   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:46:04.873548   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:46:06.931963   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058421022s)
	I1101 16:46:09.432280   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:46:09.447681   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:46:09.473932   17846 logs.go:274] 0 containers: []
	W1101 16:46:09.473946   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:46:09.474118   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:46:09.498948   17846 logs.go:274] 0 containers: []
	W1101 16:46:09.498959   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:46:09.499029   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:46:09.524969   17846 logs.go:274] 0 containers: []
	W1101 16:46:09.524982   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:46:09.525063   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:46:09.550611   17846 logs.go:274] 0 containers: []
	W1101 16:46:09.550623   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:46:09.550707   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:46:09.579224   17846 logs.go:274] 0 containers: []
	W1101 16:46:09.579235   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:46:09.579322   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:46:09.602554   17846 logs.go:274] 0 containers: []
	W1101 16:46:09.602569   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:46:09.602653   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:46:09.624126   17846 logs.go:274] 0 containers: []
	W1101 16:46:09.624138   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:46:09.624221   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:46:09.647569   17846 logs.go:274] 0 containers: []
	W1101 16:46:09.647581   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:46:09.647589   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:46:09.647596   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:46:09.659373   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:46:09.659403   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:46:09.721482   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:46:09.721494   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:46:09.721500   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:46:09.736695   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:46:09.736708   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:46:11.790266   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053564738s)
	I1101 16:46:11.790384   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:46:11.790392   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:46:14.338132   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:46:14.448443   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:46:14.472717   17846 logs.go:274] 0 containers: []
	W1101 16:46:14.472729   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:46:14.472830   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:46:14.497656   17846 logs.go:274] 0 containers: []
	W1101 16:46:14.497671   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:46:14.497759   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:46:14.523420   17846 logs.go:274] 0 containers: []
	W1101 16:46:14.523435   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:46:14.523544   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:46:14.549883   17846 logs.go:274] 0 containers: []
	W1101 16:46:14.549898   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:46:14.549990   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:46:14.576962   17846 logs.go:274] 0 containers: []
	W1101 16:46:14.576975   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:46:14.577072   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:46:14.604426   17846 logs.go:274] 0 containers: []
	W1101 16:46:14.604443   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:46:14.604540   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:46:14.630132   17846 logs.go:274] 0 containers: []
	W1101 16:46:14.630148   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:46:14.630239   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:46:14.663031   17846 logs.go:274] 0 containers: []
	W1101 16:46:14.663051   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:46:14.663059   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:46:14.663068   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:46:14.676599   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:46:14.676613   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:46:14.743345   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:46:14.743358   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:46:14.743365   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:46:14.759203   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:46:14.759217   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:46:16.806984   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047776088s)
	I1101 16:46:16.807099   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:46:16.807109   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:46:19.348053   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:46:19.448542   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:46:19.473959   17846 logs.go:274] 0 containers: []
	W1101 16:46:19.473972   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:46:19.474059   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:46:19.497655   17846 logs.go:274] 0 containers: []
	W1101 16:46:19.497669   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:46:19.497751   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:46:19.518994   17846 logs.go:274] 0 containers: []
	W1101 16:46:19.519007   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:46:19.519095   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:46:19.541427   17846 logs.go:274] 0 containers: []
	W1101 16:46:19.541438   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:46:19.541529   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:46:19.563669   17846 logs.go:274] 0 containers: []
	W1101 16:46:19.563682   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:46:19.563767   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:46:19.584528   17846 logs.go:274] 0 containers: []
	W1101 16:46:19.584540   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:46:19.584620   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:46:19.607404   17846 logs.go:274] 0 containers: []
	W1101 16:46:19.607416   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:46:19.607512   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:46:19.628999   17846 logs.go:274] 0 containers: []
	W1101 16:46:19.629010   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:46:19.629017   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:46:19.629024   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:46:19.669993   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:46:19.670007   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:46:19.682348   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:46:19.682374   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:46:19.737438   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:46:19.737448   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:46:19.737455   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:46:19.751987   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:46:19.752000   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:46:21.807372   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05538164s)
	I1101 16:46:24.308843   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:46:24.446915   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:46:24.473466   17846 logs.go:274] 0 containers: []
	W1101 16:46:24.473478   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:46:24.473594   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:46:24.496090   17846 logs.go:274] 0 containers: []
	W1101 16:46:24.496103   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:46:24.496184   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:46:24.519896   17846 logs.go:274] 0 containers: []
	W1101 16:46:24.519909   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:46:24.519993   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:46:24.543668   17846 logs.go:274] 0 containers: []
	W1101 16:46:24.543679   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:46:24.543762   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:46:24.565690   17846 logs.go:274] 0 containers: []
	W1101 16:46:24.565716   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:46:24.565836   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:46:24.588675   17846 logs.go:274] 0 containers: []
	W1101 16:46:24.588688   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:46:24.588775   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:46:24.610473   17846 logs.go:274] 0 containers: []
	W1101 16:46:24.610485   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:46:24.610568   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:46:24.632829   17846 logs.go:274] 0 containers: []
	W1101 16:46:24.632839   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:46:24.632847   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:46:24.632854   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:46:24.673391   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:46:24.673405   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:46:24.686056   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:46:24.686070   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:46:24.742993   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:46:24.743004   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:46:24.743011   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:46:24.758323   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:46:24.758336   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:46:26.824609   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.066281177s)
	I1101 16:46:29.325488   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:46:29.446359   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:46:29.469170   17846 logs.go:274] 0 containers: []
	W1101 16:46:29.469182   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:46:29.469262   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:46:29.491950   17846 logs.go:274] 0 containers: []
	W1101 16:46:29.491963   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:46:29.492045   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:46:29.514911   17846 logs.go:274] 0 containers: []
	W1101 16:46:29.514924   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:46:29.515005   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:46:29.537265   17846 logs.go:274] 0 containers: []
	W1101 16:46:29.537277   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:46:29.537363   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:46:29.558656   17846 logs.go:274] 0 containers: []
	W1101 16:46:29.558667   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:46:29.558752   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:46:29.583158   17846 logs.go:274] 0 containers: []
	W1101 16:46:29.583171   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:46:29.583292   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:46:29.607082   17846 logs.go:274] 0 containers: []
	W1101 16:46:29.607096   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:46:29.607215   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:46:29.636018   17846 logs.go:274] 0 containers: []
	W1101 16:46:29.636030   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:46:29.636040   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:46:29.636049   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:46:29.685936   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:46:29.685955   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:46:29.699713   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:46:29.699729   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:46:29.755226   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:46:29.755239   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:46:29.755246   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:46:29.769627   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:46:29.769640   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:46:31.819345   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049713738s)
	I1101 16:46:34.321710   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:46:34.446637   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:46:34.471852   17846 logs.go:274] 0 containers: []
	W1101 16:46:34.471869   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:46:34.471962   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:46:34.495027   17846 logs.go:274] 0 containers: []
	W1101 16:46:34.495038   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:46:34.495120   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:46:34.516972   17846 logs.go:274] 0 containers: []
	W1101 16:46:34.516985   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:46:34.517069   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:46:34.539993   17846 logs.go:274] 0 containers: []
	W1101 16:46:34.540005   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:46:34.540096   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:46:34.562674   17846 logs.go:274] 0 containers: []
	W1101 16:46:34.562687   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:46:34.562771   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:46:34.586643   17846 logs.go:274] 0 containers: []
	W1101 16:46:34.586654   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:46:34.586738   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:46:34.610729   17846 logs.go:274] 0 containers: []
	W1101 16:46:34.610746   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:46:34.610844   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:46:34.638456   17846 logs.go:274] 0 containers: []
	W1101 16:46:34.638470   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:46:34.638478   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:46:34.638485   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:46:34.695586   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:46:34.695598   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:46:34.695605   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:46:34.710423   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:46:34.710453   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:46:36.761839   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051395077s)
	I1101 16:46:36.761947   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:46:36.761954   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:46:36.802461   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:46:36.802477   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:46:39.316352   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:46:39.446219   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:46:39.470894   17846 logs.go:274] 0 containers: []
	W1101 16:46:39.470907   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:46:39.470988   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:46:39.495936   17846 logs.go:274] 0 containers: []
	W1101 16:46:39.495949   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:46:39.496031   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:46:39.544877   17846 logs.go:274] 0 containers: []
	W1101 16:46:39.544892   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:46:39.544996   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:46:39.570722   17846 logs.go:274] 0 containers: []
	W1101 16:46:39.570733   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:46:39.570803   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:46:39.599091   17846 logs.go:274] 0 containers: []
	W1101 16:46:39.599105   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:46:39.599209   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:46:39.624150   17846 logs.go:274] 0 containers: []
	W1101 16:46:39.624165   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:46:39.624257   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:46:39.656517   17846 logs.go:274] 0 containers: []
	W1101 16:46:39.656533   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:46:39.656625   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:46:39.683321   17846 logs.go:274] 0 containers: []
	W1101 16:46:39.683335   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:46:39.683343   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:46:39.683350   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:46:39.732520   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:46:39.732538   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:46:39.744903   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:46:39.744918   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:46:39.811062   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:46:39.811073   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:46:39.811080   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:46:39.833989   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:46:39.834006   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:46:41.889866   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055868678s)
	I1101 16:46:44.390337   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:46:44.446834   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:46:44.469843   17846 logs.go:274] 0 containers: []
	W1101 16:46:44.469855   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:46:44.469935   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:46:44.492145   17846 logs.go:274] 0 containers: []
	W1101 16:46:44.492157   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:46:44.492240   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:46:44.514934   17846 logs.go:274] 0 containers: []
	W1101 16:46:44.514946   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:46:44.515033   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:46:44.538172   17846 logs.go:274] 0 containers: []
	W1101 16:46:44.538183   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:46:44.538269   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:46:44.560806   17846 logs.go:274] 0 containers: []
	W1101 16:46:44.560818   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:46:44.560899   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:46:44.583265   17846 logs.go:274] 0 containers: []
	W1101 16:46:44.583278   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:46:44.583360   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:46:44.605417   17846 logs.go:274] 0 containers: []
	W1101 16:46:44.605430   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:46:44.605524   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:46:44.628922   17846 logs.go:274] 0 containers: []
	W1101 16:46:44.628935   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:46:44.628942   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:46:44.628948   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:46:44.686180   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:46:44.686194   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:46:44.686201   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:46:44.702735   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:46:44.702751   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:46:46.750498   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047754115s)
	I1101 16:46:46.750610   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:46:46.750618   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:46:46.792120   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:46:46.792136   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:46:49.308672   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:46:49.446172   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:46:49.470156   17846 logs.go:274] 0 containers: []
	W1101 16:46:49.470168   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:46:49.470250   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:46:49.492591   17846 logs.go:274] 0 containers: []
	W1101 16:46:49.492604   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:46:49.492687   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:46:49.516397   17846 logs.go:274] 0 containers: []
	W1101 16:46:49.516408   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:46:49.516489   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:46:49.538634   17846 logs.go:274] 0 containers: []
	W1101 16:46:49.538648   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:46:49.538729   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:46:49.561776   17846 logs.go:274] 0 containers: []
	W1101 16:46:49.561788   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:46:49.561871   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:46:49.583205   17846 logs.go:274] 0 containers: []
	W1101 16:46:49.583218   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:46:49.583304   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:46:49.606778   17846 logs.go:274] 0 containers: []
	W1101 16:46:49.606790   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:46:49.606877   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:46:49.630011   17846 logs.go:274] 0 containers: []
	W1101 16:46:49.630024   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:46:49.630032   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:46:49.630039   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:46:49.684340   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:46:49.684353   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:46:49.684360   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:46:49.699191   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:46:49.699205   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:46:51.746589   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047389853s)
	I1101 16:46:51.746700   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:46:51.746709   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:46:51.786124   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:46:51.786140   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:46:54.299405   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:46:54.446225   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:46:54.470075   17846 logs.go:274] 0 containers: []
	W1101 16:46:54.470088   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:46:54.470172   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:46:54.492916   17846 logs.go:274] 0 containers: []
	W1101 16:46:54.492928   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:46:54.492997   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:46:54.516292   17846 logs.go:274] 0 containers: []
	W1101 16:46:54.516304   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:46:54.516387   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:46:54.538617   17846 logs.go:274] 0 containers: []
	W1101 16:46:54.538628   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:46:54.538723   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:46:54.560694   17846 logs.go:274] 0 containers: []
	W1101 16:46:54.560706   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:46:54.560787   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:46:54.583371   17846 logs.go:274] 0 containers: []
	W1101 16:46:54.583382   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:46:54.583467   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:46:54.607207   17846 logs.go:274] 0 containers: []
	W1101 16:46:54.607220   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:46:54.607299   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:46:54.630531   17846 logs.go:274] 0 containers: []
	W1101 16:46:54.630544   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:46:54.630552   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:46:54.630559   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:46:54.672043   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:46:54.672064   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:46:54.684804   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:46:54.684817   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:46:54.744270   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:46:54.744280   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:46:54.744287   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:46:54.759289   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:46:54.759301   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:46:56.804889   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045594712s)
	I1101 16:46:59.307356   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:46:59.448166   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:46:59.472867   17846 logs.go:274] 0 containers: []
	W1101 16:46:59.472879   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:46:59.472961   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:46:59.495312   17846 logs.go:274] 0 containers: []
	W1101 16:46:59.495324   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:46:59.495404   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:46:59.518110   17846 logs.go:274] 0 containers: []
	W1101 16:46:59.518121   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:46:59.518203   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:46:59.540314   17846 logs.go:274] 0 containers: []
	W1101 16:46:59.540326   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:46:59.540414   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:46:59.562473   17846 logs.go:274] 0 containers: []
	W1101 16:46:59.562484   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:46:59.562580   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:46:59.585404   17846 logs.go:274] 0 containers: []
	W1101 16:46:59.585417   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:46:59.585500   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:46:59.609431   17846 logs.go:274] 0 containers: []
	W1101 16:46:59.609447   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:46:59.609591   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:46:59.632684   17846 logs.go:274] 0 containers: []
	W1101 16:46:59.632695   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:46:59.632702   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:46:59.632709   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:46:59.647863   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:46:59.647878   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:01.695520   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047649613s)
	I1101 16:47:01.695627   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:01.695634   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:01.736225   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:01.736238   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:01.748238   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:01.748251   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:01.803036   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:04.303334   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:04.446601   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:04.470885   17846 logs.go:274] 0 containers: []
	W1101 16:47:04.470898   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:04.470999   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:04.492998   17846 logs.go:274] 0 containers: []
	W1101 16:47:04.493010   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:04.493094   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:04.515078   17846 logs.go:274] 0 containers: []
	W1101 16:47:04.515090   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:04.515172   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:04.536302   17846 logs.go:274] 0 containers: []
	W1101 16:47:04.536313   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:04.536421   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:04.564072   17846 logs.go:274] 0 containers: []
	W1101 16:47:04.564084   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:04.564167   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:04.586916   17846 logs.go:274] 0 containers: []
	W1101 16:47:04.586929   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:04.587009   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:04.608456   17846 logs.go:274] 0 containers: []
	W1101 16:47:04.608467   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:04.608547   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:04.630873   17846 logs.go:274] 0 containers: []
	W1101 16:47:04.630885   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:04.630892   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:04.630901   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:04.669687   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:04.669701   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:04.682393   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:04.682408   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:04.741975   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:04.741985   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:04.741992   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:04.758674   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:04.758689   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:06.821762   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.063080167s)
	I1101 16:47:09.324265   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:09.446991   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:09.471067   17846 logs.go:274] 0 containers: []
	W1101 16:47:09.471079   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:09.471160   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:09.493529   17846 logs.go:274] 0 containers: []
	W1101 16:47:09.493541   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:09.493624   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:09.516105   17846 logs.go:274] 0 containers: []
	W1101 16:47:09.516118   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:09.516199   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:09.537903   17846 logs.go:274] 0 containers: []
	W1101 16:47:09.537915   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:09.537998   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:09.560695   17846 logs.go:274] 0 containers: []
	W1101 16:47:09.560706   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:09.560792   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:09.582649   17846 logs.go:274] 0 containers: []
	W1101 16:47:09.582661   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:09.582742   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:09.608926   17846 logs.go:274] 0 containers: []
	W1101 16:47:09.608938   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:09.609020   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:09.631702   17846 logs.go:274] 0 containers: []
	W1101 16:47:09.631715   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:09.631723   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:09.631730   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:11.680047   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048324679s)
	I1101 16:47:11.680156   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:11.680164   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:11.717834   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:11.717848   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:11.730428   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:11.730441   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:11.783395   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:11.783406   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:11.783413   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:14.297156   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:14.448020   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:14.473025   17846 logs.go:274] 0 containers: []
	W1101 16:47:14.473037   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:14.473119   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:14.494179   17846 logs.go:274] 0 containers: []
	W1101 16:47:14.494191   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:14.494274   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:14.516147   17846 logs.go:274] 0 containers: []
	W1101 16:47:14.516160   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:14.516246   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:14.537878   17846 logs.go:274] 0 containers: []
	W1101 16:47:14.537890   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:14.537973   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:14.559369   17846 logs.go:274] 0 containers: []
	W1101 16:47:14.559381   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:14.559462   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:14.581901   17846 logs.go:274] 0 containers: []
	W1101 16:47:14.581914   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:14.582002   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:14.609217   17846 logs.go:274] 0 containers: []
	W1101 16:47:14.609230   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:14.609312   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:14.632621   17846 logs.go:274] 0 containers: []
	W1101 16:47:14.632633   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:14.632640   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:14.632647   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:14.691394   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:14.691425   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:14.691432   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:14.705251   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:14.705263   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:16.753156   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047900139s)
	I1101 16:47:16.753271   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:16.753279   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:16.797923   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:16.797936   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:19.310215   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:19.446867   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:19.472445   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.472459   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:19.472550   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:19.499506   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.499521   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:19.499605   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:19.521756   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.521767   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:19.521847   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:19.546073   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.546086   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:19.546171   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:19.572096   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.572108   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:19.572241   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:19.603476   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.603491   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:19.603577   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:19.625829   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.625842   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:19.625934   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:19.655243   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.655258   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:19.655267   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:19.655275   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:21.705581   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05031398s)
	I1101 16:47:21.705692   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:21.705700   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:21.746495   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:21.746509   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:21.759332   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:21.759351   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:21.815033   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:21.815049   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:21.815056   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:24.329822   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:24.445763   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:24.470826   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.470839   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:24.470930   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:24.497015   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.497028   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:24.497119   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:24.521282   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.521303   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:24.521395   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:24.545979   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.545993   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:24.546081   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:24.569483   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.569495   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:24.569588   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:24.594426   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.594440   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:24.594523   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:24.618112   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.618125   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:24.618206   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:24.640419   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.640432   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:24.640439   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:24.640447   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:24.682991   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:24.683006   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:24.695361   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:24.695376   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:24.753273   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:24.753285   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:24.753293   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:24.769729   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:24.769746   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:26.830871   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061132216s)
	I1101 16:47:29.333235   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:29.446224   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:29.482027   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.482041   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:29.482127   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:29.506070   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.506083   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:29.506169   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:29.530095   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.530109   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:29.530205   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:29.556851   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.556864   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:29.556954   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:29.581914   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.581930   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:29.582030   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:29.611188   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.611210   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:29.611307   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:29.639823   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.639841   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:29.639941   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:29.691297   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.691315   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:29.691326   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:29.691337   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:29.736057   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:29.736076   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:29.751713   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:29.751734   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:29.826348   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:29.826361   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:29.826368   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:29.842465   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:29.842477   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:31.900987   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058517795s)
	I1101 16:47:34.401536   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:34.445880   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:34.472071   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.472083   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:34.472165   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:34.493215   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.493226   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:34.493308   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:34.515884   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.515896   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:34.515984   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:34.538600   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.538612   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:34.538693   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:34.561138   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.561150   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:34.561230   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:34.584730   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.584743   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:34.584825   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:34.606367   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.606379   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:34.606459   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:34.629424   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.629437   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:34.629444   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:34.629451   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:34.641194   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:34.641209   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:34.696267   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:34.696293   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:34.696300   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:34.710093   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:34.710106   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:36.755469   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045370905s)
	I1101 16:47:36.755589   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:36.755597   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:39.294279   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:39.447242   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:39.470574   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.470587   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:39.470667   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:39.493942   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.493955   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:39.494040   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:39.516633   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.516645   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:39.516727   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:39.538567   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.538580   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:39.538662   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:39.560384   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.560397   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:39.560479   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:39.584749   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.584761   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:39.584842   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:39.607441   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.607452   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:39.607534   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:39.629623   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.629636   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:39.629643   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:39.629649   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:39.671563   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:39.671577   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:39.683864   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:39.683877   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:39.738956   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:39.738966   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:39.738974   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:39.753740   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:39.753755   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:41.801816   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048067899s)
	I1101 16:47:44.302583   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:44.445799   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:44.472158   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.472169   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:44.472253   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:44.495116   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.495128   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:44.495210   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:44.519031   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.519044   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:44.519124   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:44.542423   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.542436   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:44.542522   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:44.566200   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.566216   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:44.566304   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:44.593754   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.593766   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:44.593849   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:44.619413   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.619441   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:44.619567   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:44.642879   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.642894   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:44.642903   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:44.642914   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:44.658068   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:44.658083   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:44.716628   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:44.716646   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:44.716653   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:44.731430   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:44.731442   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:46.779191   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047757292s)
	I1101 16:47:46.779300   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:46.779308   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:49.319955   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:49.445640   17846 kubeadm.go:631] restartCluster took 4m4.183481018s
	W1101 16:47:49.445791   17846 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I1101 16:47:49.445813   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1101 16:47:49.871351   17846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 16:47:49.880900   17846 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 16:47:49.889036   17846 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 16:47:49.889102   17846 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 16:47:49.896653   17846 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 16:47:49.896679   17846 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 16:47:49.942356   17846 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1101 16:47:49.942402   17846 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 16:47:50.243643   17846 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 16:47:50.243736   17846 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 16:47:50.243834   17846 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 16:47:50.464921   17846 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 16:47:50.466503   17846 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 16:47:50.473447   17846 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1101 16:47:50.541678   17846 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 16:47:50.562555   17846 out.go:204]   - Generating certificates and keys ...
	I1101 16:47:50.562630   17846 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 16:47:50.562721   17846 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 16:47:50.562794   17846 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 16:47:50.562898   17846 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1101 16:47:50.563021   17846 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 16:47:50.563092   17846 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1101 16:47:50.563167   17846 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1101 16:47:50.563245   17846 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1101 16:47:50.563351   17846 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 16:47:50.563446   17846 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 16:47:50.563482   17846 kubeadm.go:317] [certs] Using the existing "sa" key
	I1101 16:47:50.563539   17846 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 16:47:50.640384   17846 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 16:47:50.765850   17846 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 16:47:50.844605   17846 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 16:47:51.150107   17846 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 16:47:51.150651   17846 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 16:47:51.172236   17846 out.go:204]   - Booting up control plane ...
	I1101 16:47:51.172365   17846 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 16:47:51.172435   17846 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 16:47:51.172497   17846 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 16:47:51.172558   17846 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 16:47:51.172715   17846 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 16:48:31.132519   17846 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1101 16:48:31.133304   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:48:31.133805   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:48:36.130239   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:48:36.130466   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:48:46.123464   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:48:46.123623   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:49:06.110432   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:49:06.110727   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:49:46.082375   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:49:46.082630   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:49:46.082648   17846 kubeadm.go:317] 
	I1101 16:49:46.082688   17846 kubeadm.go:317] Unfortunately, an error has occurred:
	I1101 16:49:46.082728   17846 kubeadm.go:317] 	timed out waiting for the condition
	I1101 16:49:46.082736   17846 kubeadm.go:317] 
	I1101 16:49:46.082795   17846 kubeadm.go:317] This error is likely caused by:
	I1101 16:49:46.082882   17846 kubeadm.go:317] 	- The kubelet is not running
	I1101 16:49:46.083011   17846 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1101 16:49:46.083026   17846 kubeadm.go:317] 
	I1101 16:49:46.083139   17846 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1101 16:49:46.083168   17846 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1101 16:49:46.083194   17846 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1101 16:49:46.083198   17846 kubeadm.go:317] 
	I1101 16:49:46.083283   17846 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1101 16:49:46.083361   17846 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1101 16:49:46.083444   17846 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1101 16:49:46.083495   17846 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1101 16:49:46.083557   17846 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1101 16:49:46.083584   17846 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1101 16:49:46.085941   17846 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1101 16:49:46.086055   17846 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1101 16:49:46.086140   17846 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 16:49:46.086212   17846 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1101 16:49:46.086267   17846 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1101 16:49:46.086408   17846 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1101 16:49:46.086434   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1101 16:49:46.510880   17846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 16:49:46.521500   17846 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 16:49:46.521571   17846 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 16:49:46.528942   17846 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 16:49:46.528965   17846 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 16:49:46.575769   17846 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1101 16:49:46.575851   17846 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 16:49:46.870407   17846 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 16:49:46.870491   17846 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 16:49:46.870571   17846 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 16:49:47.092717   17846 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 16:49:47.093492   17846 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 16:49:47.100457   17846 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1101 16:49:47.174081   17846 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 16:49:47.195741   17846 out.go:204]   - Generating certificates and keys ...
	I1101 16:49:47.195829   17846 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 16:49:47.195890   17846 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 16:49:47.195970   17846 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 16:49:47.196041   17846 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1101 16:49:47.196103   17846 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 16:49:47.196146   17846 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1101 16:49:47.196202   17846 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1101 16:49:47.196281   17846 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1101 16:49:47.196364   17846 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 16:49:47.196420   17846 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 16:49:47.196452   17846 kubeadm.go:317] [certs] Using the existing "sa" key
	I1101 16:49:47.196504   17846 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 16:49:47.348838   17846 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 16:49:47.486404   17846 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 16:49:47.568476   17846 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 16:49:47.627774   17846 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 16:49:47.628523   17846 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 16:49:47.670718   17846 out.go:204]   - Booting up control plane ...
	I1101 16:49:47.670811   17846 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 16:49:47.670875   17846 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 16:49:47.670930   17846 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 16:49:47.671079   17846 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 16:49:47.671263   17846 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 16:50:27.611062   17846 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1101 16:50:27.612231   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:50:27.612522   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:50:32.608994   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:50:32.609156   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:50:42.603145   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:50:42.603381   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:51:02.590328   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:51:02.590555   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:51:42.562423   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:51:42.562742   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:51:42.562757   17846 kubeadm.go:317] 
	I1101 16:51:42.562810   17846 kubeadm.go:317] Unfortunately, an error has occurred:
	I1101 16:51:42.562880   17846 kubeadm.go:317] 	timed out waiting for the condition
	I1101 16:51:42.562887   17846 kubeadm.go:317] 
	I1101 16:51:42.562945   17846 kubeadm.go:317] This error is likely caused by:
	I1101 16:51:42.563004   17846 kubeadm.go:317] 	- The kubelet is not running
	I1101 16:51:42.563135   17846 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1101 16:51:42.563149   17846 kubeadm.go:317] 
	I1101 16:51:42.563253   17846 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1101 16:51:42.563286   17846 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1101 16:51:42.563325   17846 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1101 16:51:42.563333   17846 kubeadm.go:317] 
	I1101 16:51:42.563431   17846 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1101 16:51:42.563556   17846 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1101 16:51:42.563639   17846 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1101 16:51:42.563681   17846 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1101 16:51:42.563741   17846 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1101 16:51:42.563767   17846 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1101 16:51:42.567302   17846 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1101 16:51:42.567408   17846 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1101 16:51:42.567496   17846 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 16:51:42.567572   17846 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1101 16:51:42.567632   17846 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1101 16:51:42.567668   17846 kubeadm.go:398] StartCluster complete in 7m57.342308605s
	I1101 16:51:42.567763   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:51:42.590192   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.590204   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:51:42.590288   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:51:42.614234   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.614247   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:51:42.614333   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:51:42.637462   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.637474   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:51:42.637556   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:51:42.659360   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.659370   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:51:42.659453   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:51:42.682378   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.682393   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:51:42.682478   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:51:42.705215   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.705227   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:51:42.705312   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:51:42.727584   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.727596   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:51:42.727677   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:51:42.750243   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.750254   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:51:42.750262   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:51:42.750269   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:51:42.788804   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:51:42.788818   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:51:42.801590   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:51:42.801603   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:51:42.855623   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:51:42.855634   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:51:42.855640   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:51:42.869204   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:51:42.869216   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:51:44.921375   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052166745s)
	W1101 16:51:44.921496   17846 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1101 16:51:44.921510   17846 out.go:239] * 
	* 
	W1101 16:51:44.921638   17846 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 16:51:44.921655   17846 out.go:239] * 
	* 
	W1101 16:51:44.922287   17846 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 16:51:44.988165   17846 out.go:177] 
	W1101 16:51:45.032349   17846 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 16:51:45.032491   17846 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1101 16:51:45.032597   17846 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1101 16:51:45.075179   17846 out.go:177] 

                                                
                                                
** /stderr **
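The exit reason above is K8S_KUBELET_NOT_RUNNING: kubeadm's wait-control-plane phase gives up after the kubelet never answers on localhost:10248, and the stderr warnings point at swap being enabled, a Docker version (20.10.20) outside kubeadm v1.16's validated list, and the kubelet service not being enabled. minikube's own suggestion is to read the kubelet journal and retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of chasing that by hand, assuming the profile from this run is still up and reusing only the commands and flags quoted in the output above:

    # kubelet state inside the node container (commands taken from the kubeadm hint above)
    out/minikube-darwin-amd64 -p old-k8s-version-163757 ssh -- sudo systemctl status kubelet
    out/minikube-darwin-amd64 -p old-k8s-version-163757 ssh -- sudo journalctl -xeu kubelet
    # any control-plane container that crashed on start
    out/minikube-darwin-amd64 -p old-k8s-version-163757 ssh -- "docker ps -a | grep kube | grep -v pause"
    # retry with the cgroup driver named in the suggestion
    out/minikube-darwin-amd64 start -p old-k8s-version-163757 --memory=2200 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd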
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-163757 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-163757
helpers_test.go:235: (dbg) docker inspect old-k8s-version-163757:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e",
	        "Created": "2022-11-01T23:38:04.256272958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274043,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T23:43:41.854152852Z",
	            "FinishedAt": "2022-11-01T23:43:38.949849093Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/hostname",
	        "HostsPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/hosts",
	        "LogPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e-json.log",
	        "Name": "/old-k8s-version-163757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-163757:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-163757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a-init/diff:/var/lib/docker/overlay2/397c781354d1ae8b5c71df69b26a9a2493cf01723d23317a9b36f56b62ab53f3/diff:/var/lib/docker/overlay2/fe3fd9f7a011255c997093c6f7e1cb70c20cab26db5f52ff8b83c33d58519532/diff:/var/lib/docker/overlay2/f7328bad1e482720081fe1f9d1ab2ee05c71a9060abf63daf63a25e84818f237/diff:/var/lib/docker/overlay2/ca039979ed22affed678394443deee5ed35f2eb49243537b4205433189b87b2c/diff:/var/lib/docker/overlay2/a2ee3e754036b8777f801c847988e78d9b0ef881e82ea7467cef35a1261b9e20/diff:/var/lib/docker/overlay2/3de609efaeca546b0261017a1b19a9fa9ff6c9272609346b897e8075687c3698/diff:/var/lib/docker/overlay2/9101d388c406c87b2d10dc219dc3225ea59bfbedfc167adbfdf7578ed74a528b/diff:/var/lib/docker/overlay2/ba2db849d29a96ccb7729ee8861cfb647a06ba046b1016e99e3c2ef9e7b92675/diff:/var/lib/docker/overlay2/bb7315b5e1884c47eaad6eddfa4e422b1b240ff1d1112deab5ff41e40a12970d/diff:/var/lib/docker/overlay2/25fd1b
7d003c93a7ef576bb052318e940d8e1c8a40db37179b03563a8a099490/diff:/var/lib/docker/overlay2/f22743b1afcc328f7d2c4740efeb1401d6c011f499d200dc16b11a352dfc07f7/diff:/var/lib/docker/overlay2/59ca3268b7b3862516f40c07f313c5cdbe659f949ce4bd6e4eedcfcdd80409b0/diff:/var/lib/docker/overlay2/ce66536b9c7b7d4d38eeb3b0f5842c927c181c4584e60fa25989b9de30ec5856/diff:/var/lib/docker/overlay2/f0bdec7810d2b53f48492f34d7889fdb7c86d692422978de474816cf3bf8e923/diff:/var/lib/docker/overlay2/b0f0a882b23b6635539c83a8a2837c52090aa306e12f64ed83edcd03596f0cde/diff:/var/lib/docker/overlay2/60180139b1a11a94ee6174e6512bad4a5e162470c686d6cc7c91d7c9fb1907a2/diff:/var/lib/docker/overlay2/f1a7c8c448077705a2b48dfccf2f6e599a8ef782efd7d171b349ad43a0cddcae/diff:/var/lib/docker/overlay2/d64e00c1407419f2261e34d0974453ad696f514f79d8ecdac1b8c3a2a117349c/diff:/var/lib/docker/overlay2/7af90e8306e3b3e8ed7d2d67099da7a7cbe0ed97a5b983c84548135857efc4d0/diff:/var/lib/docker/overlay2/85101cd67d726a8a42d8951a230b3acd76d4a62615c6ffe4aac1ebef17ab422d/diff:/var/lib/d
ocker/overlay2/09a5d9c2f9897ae114e76d4aed5af38d250d044b1d274f8dafa0cfd17789ea54/diff:/var/lib/docker/overlay2/a6b97f972b460567b473da6022dd8658db13cb06830fcb676e8c1ebc927e1d44/diff:/var/lib/docker/overlay2/b569cecedfd9b79ea9a49645099405472d529e224ffe4abed0921d9fbec171a7/diff:/var/lib/docker/overlay2/278ceb611708e5dc8e810eaeb6b08b283d298009965d14772f2b61f95355477a/diff:/var/lib/docker/overlay2/c6693259dde0f3190d9019d8aca0c27c980d5c31a40fff8274d2a57d8ef19f41/diff:/var/lib/docker/overlay2/4db1d3b0ba37b1bfa0f486b9c1b327686a1069e2e6cbfc2e279c1f597f7cd346/diff:/var/lib/docker/overlay2/50e4b8ce3599837ac51b108fd983aa9b876f47f3e7253cd0976be8df23c73a33/diff:/var/lib/docker/overlay2/ad2b5d101e83bca01ddb2257701208ceb46b4668f6d14e84ee171975bb6175db/diff:/var/lib/docker/overlay2/746a904e8c69bb992522394e576896d4e35d056023809a58fbac92d497d2968a/diff:/var/lib/docker/overlay2/03794e35d9fe845753f9bcb5648e7a7c1fcf7db9bcd82c7c3824c2142cb8a2b6/diff:/var/lib/docker/overlay2/75caadeb2dfb8cc524a4e0f9d7862ccf017f755a24e00453f5a85eb29a5
837de/diff:/var/lib/docker/overlay2/1a5ce4ae9316bb13d1739267bf6b30a17188ca9ac127663735bfac3d15e50abe/diff:/var/lib/docker/overlay2/fa61eaf7b77e6fa75456860b8b75e4779478979f9b4ad94cd62eadd22743421e/diff:/var/lib/docker/overlay2/9c1cd4fe6bd059e33f020198f5ff305dab3f4b102b14b5894c76cae7dc769b92/diff:/var/lib/docker/overlay2/46cf92e0e9cc79002bfb0f5c2e0ab28c771f260b3fea2cb434cd84d3a1ea7659/diff:/var/lib/docker/overlay2/b47be14a30a9c0339a3a49b552cad979169d6c9a909e7837759a155b4c74d128/diff:/var/lib/docker/overlay2/598716c3d9ddb5de953d6a462fc1af49f742bbe02fd1c01f7d548a9f93d3913d/diff:/var/lib/docker/overlay2/cd665df1518202898f79e694456b55b64d6095a28556be2dc545241df7633be7/diff:/var/lib/docker/overlay2/909b0f879f4ce91be83bada76dad0599c2839fa8a6534f976ee095ad44dce7c6/diff:/var/lib/docker/overlay2/fd78ebbf3c4baf9a9f0036cb0ed9a8908a05f2e78572d88fcb3f026cb000710b/diff:/var/lib/docker/overlay2/8a030c72fc8571d3240e0ab2d2aea23b84385f28f3ef2dd82b5be5b925dbca5b/diff:/var/lib/docker/overlay2/d87a4221a646268a958798509b8c3cb343463c
c8427ae96a424f653a0a4508c7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-163757",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-163757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-163757",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-163757",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-163757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a78b66e11436bdcef5ae4e878d76bd762a44be207b062530209a62e8ac180eb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53982"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53983"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53984"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53981"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6a78b66e1143",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-163757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "68479d844c03",
	                        "old-k8s-version-163757"
	                    ],
	                    "NetworkID": "de11f6b0d4a3e9909764ae953f0f910d0d29438f96300416f12a7f896caa0f32",
	                    "EndpointID": "d35627681b46bada56d185972cf0b735b505b074234eaf79ad5bd6396bcc6bec",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
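The inspect dump shows the node container itself is healthy: State.Status is "running" with RestartCount 0, and the five exposed guest ports (22, 2376, 5000, 8443, 32443) are each bound to an ephemeral host port on 0.0.0.0. When only those fields matter for a post-mortem, docker inspect's Go-template formatter can pull them directly; a sketch, with format strings chosen here for illustration (the same template style the harness uses later in this log for State.Status and the 22/tcp port):

    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-163757
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-163757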
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757: exit status 2 (447.975495ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
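As the "(may be ok)" note hints, the {{.Host}} template only reports the node container, which is why it prints Running even though the control plane never came up; minikube bit-encodes component state into the status exit code, and an exit of 2 with the host bit clear is consistent with that picture. For a fuller view, a sketch using the same binary and profile (the --output flag is a standard minikube status option, not something the harness runs here):

    out/minikube-darwin-amd64 -p old-k8s-version-163757 status --output=json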
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-163757 logs -n 25
E1101 16:51:48.337846    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-163757 logs -n 25: (3.819306899s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubenet-161858                                 | kubenet-161858         | jenkins | v1.27.1 | 01 Nov 22 16:37 PDT | 01 Nov 22 16:37 PDT |
	|         | --memory=2048                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                        |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	| ssh     | -p calico-161859 pgrep -a                         | calico-161859          | jenkins | v1.27.1 | 01 Nov 22 16:37 PDT | 01 Nov 22 16:37 PDT |
	|         | kubelet                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-161858 pgrep -a                        | kubenet-161858         | jenkins | v1.27.1 | 01 Nov 22 16:37 PDT | 01 Nov 22 16:37 PDT |
	|         | kubelet                                           |                        |         |         |                     |                     |
	| delete  | -p calico-161859                                  | calico-161859          | jenkins | v1.27.1 | 01 Nov 22 16:37 PDT | 01 Nov 22 16:37 PDT |
	| start   | -p old-k8s-version-163757                         | old-k8s-version-163757 | jenkins | v1.27.1 | 01 Nov 22 16:37 PDT |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --kvm-network=default                             |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                        |         |         |                     |                     |
	|         | --keep-context=false                              |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                        |         |         |                     |                     |
	| delete  | -p kubenet-161858                                 | kubenet-161858         | jenkins | v1.27.1 | 01 Nov 22 16:39 PDT | 01 Nov 22 16:39 PDT |
	| start   | -p no-preload-163909                              | no-preload-163909      | jenkins | v1.27.1 | 01 Nov 22 16:39 PDT | 01 Nov 22 16:40 PDT |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-163909        | no-preload-163909      | jenkins | v1.27.1 | 01 Nov 22 16:40 PDT | 01 Nov 22 16:40 PDT |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p no-preload-163909                              | no-preload-163909      | jenkins | v1.27.1 | 01 Nov 22 16:40 PDT | 01 Nov 22 16:40 PDT |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-163909             | no-preload-163909      | jenkins | v1.27.1 | 01 Nov 22 16:40 PDT | 01 Nov 22 16:40 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-163909                              | no-preload-163909      | jenkins | v1.27.1 | 01 Nov 22 16:40 PDT | 01 Nov 22 16:45 PDT |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-163757   | old-k8s-version-163757 | jenkins | v1.27.1 | 01 Nov 22 16:42 PDT |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-163757                         | old-k8s-version-163757 | jenkins | v1.27.1 | 01 Nov 22 16:43 PDT | 01 Nov 22 16:43 PDT |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-163757        | old-k8s-version-163757 | jenkins | v1.27.1 | 01 Nov 22 16:43 PDT | 01 Nov 22 16:43 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-163757                         | old-k8s-version-163757 | jenkins | v1.27.1 | 01 Nov 22 16:43 PDT |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --kvm-network=default                             |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                        |         |         |                     |                     |
	|         | --keep-context=false                              |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                        |         |         |                     |                     |
	| ssh     | -p no-preload-163909 sudo                         | no-preload-163909      | jenkins | v1.27.1 | 01 Nov 22 16:45 PDT | 01 Nov 22 16:45 PDT |
	|         | crictl images -o json                             |                        |         |         |                     |                     |
	| pause   | -p no-preload-163909                              | no-preload-163909      | jenkins | v1.27.1 | 01 Nov 22 16:45 PDT | 01 Nov 22 16:45 PDT |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| unpause | -p no-preload-163909                              | no-preload-163909      | jenkins | v1.27.1 | 01 Nov 22 16:45 PDT | 01 Nov 22 16:45 PDT |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| delete  | -p no-preload-163909                              | no-preload-163909      | jenkins | v1.27.1 | 01 Nov 22 16:45 PDT | 01 Nov 22 16:45 PDT |
	| delete  | -p no-preload-163909                              | no-preload-163909      | jenkins | v1.27.1 | 01 Nov 22 16:46 PDT | 01 Nov 22 16:46 PDT |
	| start   | -p embed-certs-164600                             | embed-certs-164600     | jenkins | v1.27.1 | 01 Nov 22 16:46 PDT | 01 Nov 22 16:46 PDT |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-164600       | embed-certs-164600     | jenkins | v1.27.1 | 01 Nov 22 16:47 PDT | 01 Nov 22 16:47 PDT |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p embed-certs-164600                             | embed-certs-164600     | jenkins | v1.27.1 | 01 Nov 22 16:47 PDT | 01 Nov 22 16:47 PDT |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-164600            | embed-certs-164600     | jenkins | v1.27.1 | 01 Nov 22 16:47 PDT | 01 Nov 22 16:47 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-164600                             | embed-certs-164600     | jenkins | v1.27.1 | 01 Nov 22 16:47 PDT |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/01 16:47:16
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 16:47:16.155876   18448 out.go:296] Setting OutFile to fd 1 ...
	I1101 16:47:16.156136   18448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 16:47:16.156141   18448 out.go:309] Setting ErrFile to fd 2...
	I1101 16:47:16.156145   18448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 16:47:16.156303   18448 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	I1101 16:47:16.156892   18448 out.go:303] Setting JSON to false
	I1101 16:47:16.176677   18448 start.go:116] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4611,"bootTime":1667341825,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1101 16:47:16.176773   18448 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1101 16:47:16.198382   18448 out.go:177] * [embed-certs-164600] minikube v1.27.1 on Darwin 13.0
	I1101 16:47:16.242431   18448 notify.go:220] Checking for updates...
	I1101 16:47:16.263975   18448 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 16:47:16.305949   18448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 16:47:16.327271   18448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1101 16:47:16.349374   18448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 16:47:16.371408   18448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	I1101 16:47:16.393830   18448 config.go:180] Loaded profile config "embed-certs-164600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 16:47:16.394513   18448 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 16:47:16.457479   18448 docker.go:137] docker version: linux-20.10.20
	I1101 16:47:16.457639   18448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 16:47:16.598211   18448 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-01 23:47:16.525429028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 16:47:16.620262   18448 out.go:177] * Using the docker driver based on existing profile
	I1101 16:47:16.642968   18448 start.go:282] selected driver: docker
	I1101 16:47:16.642999   18448 start.go:808] validating driver "docker" against &{Name:embed-certs-164600 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-164600 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mo
untString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 16:47:16.643133   18448 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 16:47:16.646911   18448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 16:47:16.795743   18448 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-01 23:47:16.715800314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 16:47:16.795970   18448 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 16:47:16.796004   18448 cni.go:95] Creating CNI manager for ""
	I1101 16:47:16.796052   18448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 16:47:16.796068   18448 start_flags.go:317] config:
	{Name:embed-certs-164600 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-164600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 16:47:16.839842   18448 out.go:177] * Starting control plane node embed-certs-164600 in cluster embed-certs-164600
	I1101 16:47:16.860712   18448 cache.go:120] Beginning downloading kic base image for docker with docker
	I1101 16:47:16.881877   18448 out.go:177] * Pulling base image ...
	I1101 16:47:16.923799   18448 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1101 16:47:16.923828   18448 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 16:47:16.923899   18448 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1101 16:47:16.923918   18448 cache.go:57] Caching tarball of preloaded images
	I1101 16:47:16.924159   18448 preload.go:174] Found /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 16:47:16.924180   18448 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1101 16:47:16.925144   18448 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/embed-certs-164600/config.json ...
	I1101 16:47:16.980387   18448 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1101 16:47:16.980407   18448 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1101 16:47:16.980416   18448 cache.go:208] Successfully downloaded all kic artifacts
	I1101 16:47:16.980491   18448 start.go:364] acquiring machines lock for embed-certs-164600: {Name:mk1a0b9289717da769d02641f933de5a09606cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 16:47:16.980584   18448 start.go:368] acquired machines lock for "embed-certs-164600" in 67.589µs
	I1101 16:47:16.980619   18448 start.go:96] Skipping create...Using existing machine configuration
	I1101 16:47:16.980630   18448 fix.go:55] fixHost starting: 
	I1101 16:47:16.980880   18448 cli_runner.go:164] Run: docker container inspect embed-certs-164600 --format={{.State.Status}}
	I1101 16:47:17.037554   18448 fix.go:103] recreateIfNeeded on embed-certs-164600: state=Stopped err=<nil>
	W1101 16:47:17.037583   18448 fix.go:129] unexpected machine state, will restart: <nil>
	I1101 16:47:17.059440   18448 out.go:177] * Restarting existing docker container for "embed-certs-164600" ...
	I1101 16:47:17.080404   18448 cli_runner.go:164] Run: docker start embed-certs-164600
	I1101 16:47:17.416540   18448 cli_runner.go:164] Run: docker container inspect embed-certs-164600 --format={{.State.Status}}
	I1101 16:47:17.478277   18448 kic.go:415] container "embed-certs-164600" state is running.
	I1101 16:47:17.478957   18448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-164600
	I1101 16:47:17.545337   18448 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/embed-certs-164600/config.json ...
	I1101 16:47:17.545764   18448 machine.go:88] provisioning docker machine ...
	I1101 16:47:17.545792   18448 ubuntu.go:169] provisioning hostname "embed-certs-164600"
	I1101 16:47:17.545893   18448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-164600
	I1101 16:47:17.619797   18448 main.go:134] libmachine: Using SSH client type: native
	I1101 16:47:17.620045   18448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54140 <nil> <nil>}
	I1101 16:47:17.620059   18448 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-164600 && echo "embed-certs-164600" | sudo tee /etc/hostname
	I1101 16:47:17.760471   18448 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-164600
	
	I1101 16:47:17.760610   18448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-164600
	I1101 16:47:17.879693   18448 main.go:134] libmachine: Using SSH client type: native
	I1101 16:47:17.879891   18448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54140 <nil> <nil>}
	I1101 16:47:17.879903   18448 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-164600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-164600/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-164600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 16:47:17.999993   18448 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 16:47:18.000011   18448 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15232-2108/.minikube CaCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15232-2108/.minikube}
	I1101 16:47:18.000027   18448 ubuntu.go:177] setting up certificates
	I1101 16:47:18.000037   18448 provision.go:83] configureAuth start
	I1101 16:47:18.000133   18448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-164600
	I1101 16:47:18.062118   18448 provision.go:138] copyHostCerts
	I1101 16:47:18.062302   18448 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem, removing ...
	I1101 16:47:18.062333   18448 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem
	I1101 16:47:18.062492   18448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem (1082 bytes)
	I1101 16:47:18.062802   18448 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem, removing ...
	I1101 16:47:18.062810   18448 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem
	I1101 16:47:18.062922   18448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem (1123 bytes)
	I1101 16:47:18.063200   18448 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem, removing ...
	I1101 16:47:18.063211   18448 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem
	I1101 16:47:18.063277   18448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem (1675 bytes)
	I1101 16:47:18.063407   18448 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem org=jenkins.embed-certs-164600 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-164600]
	I1101 16:47:18.151791   18448 provision.go:172] copyRemoteCerts
	I1101 16:47:18.151877   18448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 16:47:18.151946   18448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-164600
	I1101 16:47:18.215472   18448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54140 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/embed-certs-164600/id_rsa Username:docker}
	I1101 16:47:18.301967   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 16:47:18.321093   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 16:47:18.342241   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 16:47:18.362945   18448 provision.go:86] duration metric: configureAuth took 362.891846ms
	I1101 16:47:18.362962   18448 ubuntu.go:193] setting minikube options for container-runtime
	I1101 16:47:18.363172   18448 config.go:180] Loaded profile config "embed-certs-164600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 16:47:18.363269   18448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-164600
	I1101 16:47:18.430016   18448 main.go:134] libmachine: Using SSH client type: native
	I1101 16:47:18.430202   18448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54140 <nil> <nil>}
	I1101 16:47:18.430211   18448 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 16:47:18.554997   18448 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1101 16:47:18.555010   18448 ubuntu.go:71] root file system type: overlay
	I1101 16:47:18.555209   18448 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 16:47:18.555310   18448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-164600
	I1101 16:47:18.617463   18448 main.go:134] libmachine: Using SSH client type: native
	I1101 16:47:18.617647   18448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54140 <nil> <nil>}
	I1101 16:47:18.617699   18448 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 16:47:18.749744   18448 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 16:47:18.749860   18448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-164600
	I1101 16:47:18.807244   18448 main.go:134] libmachine: Using SSH client type: native
	I1101 16:47:18.807399   18448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54140 <nil> <nil>}
	I1101 16:47:18.807412   18448 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 16:47:18.927878   18448 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 16:47:18.927896   18448 machine.go:91] provisioned docker machine in 1.382138282s
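	Note: the sequence above renders the unit to /lib/systemd/system/docker.service.new over SSH, diffs it against the installed unit, and only moves it into place and restarts dockerd when the content actually changed. Spelled out in readable form (a sketch of the same idempotent update, using only the paths and systemctl flags that appear in the log lines above):
	
	    # replace the unit only if the newly rendered file differs from the installed one
	    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	    }
	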
	I1101 16:47:18.927907   18448 start.go:300] post-start starting for "embed-certs-164600" (driver="docker")
	I1101 16:47:18.927912   18448 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 16:47:18.927992   18448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 16:47:18.928063   18448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-164600
	I1101 16:47:18.990049   18448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54140 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/embed-certs-164600/id_rsa Username:docker}
	I1101 16:47:19.077584   18448 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 16:47:19.081316   18448 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 16:47:19.081333   18448 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 16:47:19.081340   18448 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 16:47:19.081344   18448 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1101 16:47:19.081352   18448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/addons for local assets ...
	I1101 16:47:19.081452   18448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/files for local assets ...
	I1101 16:47:19.081630   18448 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem -> 34132.pem in /etc/ssl/certs
	I1101 16:47:19.081821   18448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 16:47:19.089362   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /etc/ssl/certs/34132.pem (1708 bytes)
	I1101 16:47:19.108980   18448 start.go:303] post-start completed in 181.06546ms
	I1101 16:47:19.109068   18448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 16:47:19.109166   18448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-164600
	I1101 16:47:19.172036   18448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54140 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/embed-certs-164600/id_rsa Username:docker}
	I1101 16:47:19.256319   18448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 16:47:19.260828   18448 fix.go:57] fixHost completed within 2.280222409s
	I1101 16:47:19.260838   18448 start.go:83] releasing machines lock for "embed-certs-164600", held for 2.280270749s
	I1101 16:47:19.260943   18448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-164600
	I1101 16:47:19.320469   18448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 16:47:19.320470   18448 ssh_runner.go:195] Run: systemctl --version
	I1101 16:47:19.320552   18448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-164600
	I1101 16:47:19.320571   18448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-164600
	I1101 16:47:19.383031   18448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54140 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/embed-certs-164600/id_rsa Username:docker}
	I1101 16:47:19.385011   18448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54140 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/embed-certs-164600/id_rsa Username:docker}
	I1101 16:47:19.468822   18448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 16:47:19.527911   18448 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1101 16:47:19.528025   18448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 16:47:19.542555   18448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 16:47:19.557006   18448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 16:47:19.635013   18448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 16:47:19.706310   18448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 16:47:19.777487   18448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 16:47:20.028630   18448 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 16:47:20.104823   18448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 16:47:20.172870   18448 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1101 16:47:20.182920   18448 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1101 16:47:20.183011   18448 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1101 16:47:20.187152   18448 start.go:472] Will wait 60s for crictl version
	I1101 16:47:20.187203   18448 ssh_runner.go:195] Run: sudo crictl version
	I1101 16:47:20.286846   18448 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1101 16:47:20.286954   18448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 16:47:20.314760   18448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 16:47:16.753156   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047900139s)
	I1101 16:47:16.753271   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:16.753279   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:16.797923   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:16.797936   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:19.310215   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:19.446867   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:19.472445   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.472459   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:19.472550   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:19.499506   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.499521   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:19.499605   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:19.521756   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.521767   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:19.521847   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:19.546073   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.546086   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:19.546171   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:19.572096   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.572108   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:19.572241   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:19.603476   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.603491   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:19.603577   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:19.625829   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.625842   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:19.625934   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:19.655243   17846 logs.go:274] 0 containers: []
	W1101 16:47:19.655258   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:19.655267   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:19.655275   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:20.387131   18448 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1101 16:47:20.387322   18448 cli_runner.go:164] Run: docker exec -t embed-certs-164600 dig +short host.docker.internal
	I1101 16:47:20.519910   18448 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1101 16:47:20.520069   18448 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1101 16:47:20.525567   18448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
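	Note: the one-liner above rewrites /etc/hosts in place: it filters out any existing host.minikube.internal entry, appends the freshly dug host IP (192.168.65.2 here), writes the result to a temp file, and copies it back over /etc/hosts. The same steps expanded (a sketch using the values from the log line above; the separator between IP and hostname is a literal tab):
	
	    # drop any stale host.minikube.internal line, then append the current mapping
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo "192.168.65.2	host.minikube.internal"
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
	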
	I1101 16:47:20.535310   18448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-164600
	I1101 16:47:20.596051   18448 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1101 16:47:20.596135   18448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 16:47:20.621230   18448 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1101 16:47:20.621246   18448 docker.go:543] Images already preloaded, skipping extraction
	I1101 16:47:20.621339   18448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 16:47:20.645450   18448 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1101 16:47:20.645477   18448 cache_images.go:84] Images are preloaded, skipping loading
	I1101 16:47:20.645631   18448 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1101 16:47:20.715735   18448 cni.go:95] Creating CNI manager for ""
	I1101 16:47:20.715750   18448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 16:47:20.715767   18448 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 16:47:20.715783   18448 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-164600 NodeName:embed-certs-164600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1101 16:47:20.715918   18448 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-164600"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 16:47:20.716005   18448 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-164600 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:embed-certs-164600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 16:47:20.716094   18448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1101 16:47:20.723926   18448 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 16:47:20.723990   18448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 16:47:20.731066   18448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I1101 16:47:20.744855   18448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 16:47:20.757692   18448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2040 bytes)
	I1101 16:47:20.770965   18448 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1101 16:47:20.774912   18448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 16:47:20.784400   18448 certs.go:54] Setting up /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/embed-certs-164600 for IP: 192.168.67.2
	I1101 16:47:20.784522   18448 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key
	I1101 16:47:20.784576   18448 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key
	I1101 16:47:20.784666   18448 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/embed-certs-164600/client.key
	I1101 16:47:20.784731   18448 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/embed-certs-164600/apiserver.key.c7fa3a9e
	I1101 16:47:20.784791   18448 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/embed-certs-164600/proxy-client.key
	I1101 16:47:20.785029   18448 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem (1338 bytes)
	W1101 16:47:20.785085   18448 certs.go:384] ignoring /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413_empty.pem, impossibly tiny 0 bytes
	I1101 16:47:20.785099   18448 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 16:47:20.785137   18448 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem (1082 bytes)
	I1101 16:47:20.785173   18448 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem (1123 bytes)
	I1101 16:47:20.785211   18448 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem (1675 bytes)
	I1101 16:47:20.785285   18448 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem (1708 bytes)
	I1101 16:47:20.785816   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/embed-certs-164600/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 16:47:20.803374   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/embed-certs-164600/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 16:47:20.820702   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/embed-certs-164600/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 16:47:20.838282   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/embed-certs-164600/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 16:47:20.855815   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 16:47:20.873041   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 16:47:20.890152   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 16:47:20.907196   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 16:47:20.924179   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem --> /usr/share/ca-certificates/3413.pem (1338 bytes)
	I1101 16:47:20.941310   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /usr/share/ca-certificates/34132.pem (1708 bytes)
	I1101 16:47:20.958617   18448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 16:47:20.976422   18448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 16:47:20.989476   18448 ssh_runner.go:195] Run: openssl version
	I1101 16:47:20.995161   18448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3413.pem && ln -fs /usr/share/ca-certificates/3413.pem /etc/ssl/certs/3413.pem"
	I1101 16:47:21.003045   18448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3413.pem
	I1101 16:47:21.006943   18448 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  1 22:49 /usr/share/ca-certificates/3413.pem
	I1101 16:47:21.007003   18448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3413.pem
	I1101 16:47:21.012470   18448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3413.pem /etc/ssl/certs/51391683.0"
	I1101 16:47:21.019816   18448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34132.pem && ln -fs /usr/share/ca-certificates/34132.pem /etc/ssl/certs/34132.pem"
	I1101 16:47:21.027994   18448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34132.pem
	I1101 16:47:21.032187   18448 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  1 22:49 /usr/share/ca-certificates/34132.pem
	I1101 16:47:21.032240   18448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34132.pem
	I1101 16:47:21.037494   18448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34132.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 16:47:21.044989   18448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 16:47:21.052891   18448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:47:21.056795   18448 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  1 22:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:47:21.056845   18448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 16:47:21.061964   18448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
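	Note: the link names created above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject-name hashes of the respective certificates; that hash-named .0 symlink is how OpenSSL looks up CAs in /etc/ssl/certs. The hash can be reproduced directly (illustrative, reusing the minikubeCA.pem path from the log):
	
	    # print the subject hash (e.g. b5213941) and create the matching .0 symlink
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	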
	I1101 16:47:21.068919   18448 kubeadm.go:396] StartCluster: {Name:embed-certs-164600 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-164600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 16:47:21.069036   18448 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 16:47:21.092216   18448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 16:47:21.099938   18448 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1101 16:47:21.099953   18448 kubeadm.go:627] restartCluster start
	I1101 16:47:21.100006   18448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 16:47:21.106750   18448 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:21.106845   18448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-164600
	I1101 16:47:21.705581   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05031398s)
	I1101 16:47:21.705692   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:21.705700   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:21.746495   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:21.746509   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:21.759332   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:21.759351   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:21.815033   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:21.815049   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:21.815056   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:24.329822   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:24.445763   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:24.470826   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.470839   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:24.470930   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:24.497015   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.497028   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:24.497119   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:24.521282   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.521303   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:24.521395   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:24.545979   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.545993   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:24.546081   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:24.569483   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.569495   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:24.569588   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:24.594426   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.594440   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:24.594523   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:24.618112   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.618125   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:24.618206   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:24.640419   17846 logs.go:274] 0 containers: []
	W1101 16:47:24.640432   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:24.640439   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:24.640447   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:24.682991   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:24.683006   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:24.695361   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:24.695376   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:24.753273   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:24.753285   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:24.753293   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:24.769729   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:24.769746   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:21.186506   18448 kubeconfig.go:135] verify returned: extract IP: "embed-certs-164600" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 16:47:21.186686   18448 kubeconfig.go:146] "embed-certs-164600" context is missing from /Users/jenkins/minikube-integration/15232-2108/kubeconfig - will repair!
	I1101 16:47:21.187005   18448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/kubeconfig: {Name:mka869f80d5e962d9ffa24675c3f5e3e0593fcfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 16:47:21.188238   18448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 16:47:21.196046   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:21.196112   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:21.204226   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:21.405074   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:21.405204   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:21.417300   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:21.604424   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:21.604658   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:21.615172   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:21.804677   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:21.804790   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:21.815138   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:22.004320   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:22.004393   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:22.013535   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:22.204348   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:22.204479   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:22.215392   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:22.405270   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:22.405487   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:22.416261   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:22.606359   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:22.606511   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:22.617697   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:22.804395   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:22.804559   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:22.816035   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:23.006353   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:23.006518   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:23.017138   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:23.206424   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:23.206551   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:23.217485   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:23.406096   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:23.406344   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:23.417407   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:23.604286   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:23.604405   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:23.615517   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:23.806357   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:23.806544   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:23.817564   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:24.006089   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:24.006249   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:24.017368   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:24.206310   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:24.206505   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:24.217396   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:24.217406   18448 api_server.go:165] Checking apiserver status ...
	I1101 16:47:24.217466   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 16:47:24.225696   18448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:24.225707   18448 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I1101 16:47:24.225714   18448 kubeadm.go:1114] stopping kube-system containers ...
	I1101 16:47:24.225801   18448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 16:47:24.248912   18448 docker.go:444] Stopping containers: [773f082b8d73 a45543f0c17d c2f03180a90c 7064bbad1de5 5506a4fbc527 1588d078c8aa a6a202f445e0 7a3aedb23394 d50c5f845ad3 0507a4e06004 c96df9881b14 c2757173c670 e8d026a0e299 85ee0c1cd8af 3f224b59d6f5 76513ae37232]
	I1101 16:47:24.249012   18448 ssh_runner.go:195] Run: docker stop 773f082b8d73 a45543f0c17d c2f03180a90c 7064bbad1de5 5506a4fbc527 1588d078c8aa a6a202f445e0 7a3aedb23394 d50c5f845ad3 0507a4e06004 c96df9881b14 c2757173c670 e8d026a0e299 85ee0c1cd8af 3f224b59d6f5 76513ae37232
	I1101 16:47:24.272959   18448 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 16:47:24.283581   18448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 16:47:24.291585   18448 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Nov  1 23:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Nov  1 23:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Nov  1 23:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov  1 23:46 /etc/kubernetes/scheduler.conf
	
	I1101 16:47:24.291649   18448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 16:47:24.299080   18448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 16:47:24.306402   18448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 16:47:24.313623   18448 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:24.313686   18448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 16:47:24.321927   18448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 16:47:24.329641   18448 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:47:24.329708   18448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 16:47:24.338328   18448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 16:47:24.347263   18448 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 16:47:24.347280   18448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:47:24.399520   18448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:47:24.943506   18448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:47:25.072260   18448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:47:25.130478   18448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:47:25.196206   18448 api_server.go:51] waiting for apiserver process to appear ...
	I1101 16:47:25.196277   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:25.746390   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:26.830871   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061132216s)
	I1101 16:47:29.333235   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:29.446224   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:29.482027   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.482041   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:29.482127   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:29.506070   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.506083   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:29.506169   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:29.530095   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.530109   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:29.530205   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:29.556851   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.556864   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:29.556954   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:29.581914   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.581930   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:29.582030   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:29.611188   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.611210   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:29.611307   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:29.639823   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.639841   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:29.639941   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:29.691297   17846 logs.go:274] 0 containers: []
	W1101 16:47:29.691315   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:29.691326   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:29.691337   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:29.736057   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:29.736076   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:29.751713   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:29.751734   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:29.826348   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:29.826361   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:29.826368   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:29.842465   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:29.842477   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
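[editor's note] Each log-gathering pass above asks Docker for containers named after one control-plane component ("docker ps -a --filter=name=k8s_<component> --format={{.ID}}") and, finding none, falls back to kubelet/dmesg/docker journal output. A rough Go equivalent of that enumeration step, for illustration only; the component list comes from the log, the containerIDs helper does not exist in minikube's logs.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists docker containers whose name matches the k8s_<component>
// prefix, mirroring the filter/format flags visible in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kubernetes-dashboard", "storage-provisioner", "kube-controller-manager"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			// No container found: the log then gathers kubelet/dmesg/docker journals instead.
			fmt.Printf("no container matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
[/editor's note]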
	I1101 16:47:26.246583   18448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:26.262333   18448 api_server.go:71] duration metric: took 1.066138937s to wait for apiserver process to appear ...
	I1101 16:47:26.262351   18448 api_server.go:87] waiting for apiserver healthz status ...
	I1101 16:47:26.262364   18448 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54144/healthz ...
	I1101 16:47:26.263762   18448 api_server.go:268] stopped: https://127.0.0.1:54144/healthz: Get "https://127.0.0.1:54144/healthz": EOF
	I1101 16:47:26.765816   18448 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54144/healthz ...
	I1101 16:47:29.654863   18448 api_server.go:278] https://127.0.0.1:54144/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 16:47:29.654885   18448 api_server.go:102] status: https://127.0.0.1:54144/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 16:47:29.763846   18448 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54144/healthz ...
	I1101 16:47:29.775202   18448 api_server.go:278] https://127.0.0.1:54144/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 16:47:29.775228   18448 api_server.go:102] status: https://127.0.0.1:54144/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 16:47:30.265890   18448 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54144/healthz ...
	I1101 16:47:30.273374   18448 api_server.go:278] https://127.0.0.1:54144/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 16:47:30.273392   18448 api_server.go:102] status: https://127.0.0.1:54144/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 16:47:30.763788   18448 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54144/healthz ...
	I1101 16:47:30.770554   18448 api_server.go:278] https://127.0.0.1:54144/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 16:47:30.770572   18448 api_server.go:102] status: https://127.0.0.1:54144/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
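[editor's note] The 403 and 500 responses above are the expected transient states while the apiserver's post-start hooks (RBAC bootstrap, priority classes, apiservice registration) finish; minikube simply keeps polling /healthz until it gets a plain 200 "ok", which happens a few entries further down at 16:47:31. A minimal sketch of such a polling loop, assuming the same local endpoint and skipping TLS verification as the self-signed apiserver cert would require; waitForHealthz is an illustrative name, not minikube's api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it returns 200,
// treating 403 (RBAC not bootstrapped yet) and 500 (post-start hooks pending)
// as "not ready yet", like the retries visible in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz never returned 200 within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:54144/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
[/editor's note]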
	I1101 16:47:31.900987   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058517795s)
	I1101 16:47:34.401536   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:34.445880   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:34.472071   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.472083   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:34.472165   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:34.493215   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.493226   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:34.493308   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:34.515884   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.515896   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:34.515984   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:34.538600   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.538612   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:34.538693   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:34.561138   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.561150   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:34.561230   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:34.584730   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.584743   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:34.584825   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:34.606367   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.606379   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:34.606459   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:34.629424   17846 logs.go:274] 0 containers: []
	W1101 16:47:34.629437   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:34.629444   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:34.629451   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:34.641194   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:34.641209   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:34.696267   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:34.696293   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:34.696300   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:34.710093   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:34.710106   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:31.264112   18448 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54144/healthz ...
	I1101 16:47:31.270114   18448 api_server.go:278] https://127.0.0.1:54144/healthz returned 200:
	ok
	I1101 16:47:31.280333   18448 api_server.go:140] control plane version: v1.25.3
	I1101 16:47:31.280346   18448 api_server.go:130] duration metric: took 5.018041s to wait for apiserver health ...
	I1101 16:47:31.280354   18448 cni.go:95] Creating CNI manager for ""
	I1101 16:47:31.280361   18448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 16:47:31.280374   18448 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 16:47:31.288119   18448 system_pods.go:59] 8 kube-system pods found
	I1101 16:47:31.288135   18448 system_pods.go:61] "coredns-565d847f94-6d27z" [ae72db7e-ee7a-4615-9f60-6eacf1b71e5b] Running
	I1101 16:47:31.288142   18448 system_pods.go:61] "etcd-embed-certs-164600" [35d08bed-e0c3-4300-bb5e-3d1b2b2bc933] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 16:47:31.288146   18448 system_pods.go:61] "kube-apiserver-embed-certs-164600" [676bd324-3dd6-4226-a139-9100cb0aa01f] Running
	I1101 16:47:31.288150   18448 system_pods.go:61] "kube-controller-manager-embed-certs-164600" [8709eee2-3506-4961-a5a2-4730d7d0fb07] Running
	I1101 16:47:31.288153   18448 system_pods.go:61] "kube-proxy-hfp59" [359b5d09-3b72-45a9-a1f3-862450b32dd4] Running
	I1101 16:47:31.288157   18448 system_pods.go:61] "kube-scheduler-embed-certs-164600" [04d466d4-9a93-4b0b-828d-3202864a13e2] Running
	I1101 16:47:31.288162   18448 system_pods.go:61] "metrics-server-5c8fd5cf8-6vrkx" [3e9c1e3b-1b7b-4ffc-8b8d-d3c4bc66bf0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 16:47:31.288167   18448 system_pods.go:61] "storage-provisioner" [59388b50-6b8e-4df1-b0ad-be7f94429b02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 16:47:31.288172   18448 system_pods.go:74] duration metric: took 7.792591ms to wait for pod list to return data ...
	I1101 16:47:31.288178   18448 node_conditions.go:102] verifying NodePressure condition ...
	I1101 16:47:31.291444   18448 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I1101 16:47:31.291462   18448 node_conditions.go:123] node cpu capacity is 6
	I1101 16:47:31.291475   18448 node_conditions.go:105] duration metric: took 3.293787ms to run NodePressure ...
	I1101 16:47:31.291486   18448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 16:47:31.546332   18448 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1101 16:47:31.551728   18448 kubeadm.go:778] kubelet initialised
	I1101 16:47:31.551740   18448 kubeadm.go:779] duration metric: took 5.39139ms waiting for restarted kubelet to initialise ...
	I1101 16:47:31.551747   18448 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 16:47:31.559364   18448 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-6d27z" in "kube-system" namespace to be "Ready" ...
	I1101 16:47:31.567444   18448 pod_ready.go:92] pod "coredns-565d847f94-6d27z" in "kube-system" namespace has status "Ready":"True"
	I1101 16:47:31.567454   18448 pod_ready.go:81] duration metric: took 8.077219ms waiting for pod "coredns-565d847f94-6d27z" in "kube-system" namespace to be "Ready" ...
	I1101 16:47:31.567461   18448 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-164600" in "kube-system" namespace to be "Ready" ...
	I1101 16:47:33.582944   18448 pod_ready.go:102] pod "etcd-embed-certs-164600" in "kube-system" namespace has status "Ready":"False"
	I1101 16:47:36.082608   18448 pod_ready.go:102] pod "etcd-embed-certs-164600" in "kube-system" namespace has status "Ready":"False"
	I1101 16:47:36.755469   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045370905s)
	I1101 16:47:36.755589   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:36.755597   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:39.294279   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:39.447242   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:39.470574   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.470587   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:39.470667   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:39.493942   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.493955   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:39.494040   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:39.516633   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.516645   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:39.516727   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:39.538567   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.538580   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:39.538662   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:39.560384   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.560397   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:39.560479   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:39.584749   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.584761   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:39.584842   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:39.607441   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.607452   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:39.607534   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:39.629623   17846 logs.go:274] 0 containers: []
	W1101 16:47:39.629636   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:39.629643   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:39.629649   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:39.671563   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:39.671577   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:39.683864   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:39.683877   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:39.738956   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:39.738966   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:39.738974   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:39.753740   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:39.753755   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:38.579134   18448 pod_ready.go:102] pod "etcd-embed-certs-164600" in "kube-system" namespace has status "Ready":"False"
	I1101 16:47:40.581750   18448 pod_ready.go:102] pod "etcd-embed-certs-164600" in "kube-system" namespace has status "Ready":"False"
	I1101 16:47:41.801816   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048067899s)
	I1101 16:47:44.302583   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:44.445799   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:47:44.472158   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.472169   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:47:44.472253   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:47:44.495116   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.495128   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:47:44.495210   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:47:44.519031   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.519044   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:47:44.519124   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:47:44.542423   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.542436   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:47:44.542522   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:47:44.566200   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.566216   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:47:44.566304   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:47:44.593754   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.593766   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:47:44.593849   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:47:44.619413   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.619441   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:47:44.619567   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:47:44.642879   17846 logs.go:274] 0 containers: []
	W1101 16:47:44.642894   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:47:44.642903   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:47:44.642914   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:47:44.658068   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:47:44.658083   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:47:44.716628   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:47:44.716646   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:47:44.716653   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:47:44.731430   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:47:44.731442   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:47:41.579229   18448 pod_ready.go:92] pod "etcd-embed-certs-164600" in "kube-system" namespace has status "Ready":"True"
	I1101 16:47:41.579243   18448 pod_ready.go:81] duration metric: took 10.011878103s waiting for pod "etcd-embed-certs-164600" in "kube-system" namespace to be "Ready" ...
	I1101 16:47:41.579254   18448 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-164600" in "kube-system" namespace to be "Ready" ...
	I1101 16:47:41.584285   18448 pod_ready.go:92] pod "kube-apiserver-embed-certs-164600" in "kube-system" namespace has status "Ready":"True"
	I1101 16:47:41.584294   18448 pod_ready.go:81] duration metric: took 5.033752ms waiting for pod "kube-apiserver-embed-certs-164600" in "kube-system" namespace to be "Ready" ...
	I1101 16:47:41.584300   18448 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-164600" in "kube-system" namespace to be "Ready" ...
	I1101 16:47:41.589560   18448 pod_ready.go:92] pod "kube-controller-manager-embed-certs-164600" in "kube-system" namespace has status "Ready":"True"
	I1101 16:47:41.589569   18448 pod_ready.go:81] duration metric: took 5.263293ms waiting for pod "kube-controller-manager-embed-certs-164600" in "kube-system" namespace to be "Ready" ...
	I1101 16:47:41.589580   18448 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hfp59" in "kube-system" namespace to be "Ready" ...
	I1101 16:47:41.594232   18448 pod_ready.go:92] pod "kube-proxy-hfp59" in "kube-system" namespace has status "Ready":"True"
	I1101 16:47:41.594241   18448 pod_ready.go:81] duration metric: took 4.65547ms waiting for pod "kube-proxy-hfp59" in "kube-system" namespace to be "Ready" ...
	I1101 16:47:41.594247   18448 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-164600" in "kube-system" namespace to be "Ready" ...
	I1101 16:47:43.605300   18448 pod_ready.go:102] pod "kube-scheduler-embed-certs-164600" in "kube-system" namespace has status "Ready":"False"
	I1101 16:47:45.106748   18448 pod_ready.go:92] pod "kube-scheduler-embed-certs-164600" in "kube-system" namespace has status "Ready":"True"
	I1101 16:47:45.106761   18448 pod_ready.go:81] duration metric: took 3.512545082s waiting for pod "kube-scheduler-embed-certs-164600" in "kube-system" namespace to be "Ready" ...
	I1101 16:47:45.106768   18448 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace to be "Ready" ...
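[editor's note] The pod_ready.go waits above amount to polling each system-critical pod until its PodReady condition reports True; in this run metrics-server never gets there, which is why the "Ready":"False" lines repeat below for several minutes. A sketch of the same check with client-go, assuming a standard kubeconfig at the path seen in the log; the polling function and the choice of pod are illustrative, not minikube's code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll one pod, as the log does for each system-critical pod, up to 4 minutes.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-565d847f94-6d27z", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
[/editor's note]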
	I1101 16:47:46.779191   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047757292s)
	I1101 16:47:46.779300   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:47:46.779308   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:47:49.319955   17846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:47:49.445640   17846 kubeadm.go:631] restartCluster took 4m4.183481018s
	W1101 16:47:49.445791   17846 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I1101 16:47:49.445813   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1101 16:47:49.871351   17846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 16:47:49.880900   17846 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 16:47:49.889036   17846 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 16:47:49.889102   17846 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 16:47:49.896653   17846 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 16:47:49.896679   17846 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 16:47:49.942356   17846 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1101 16:47:49.942402   17846 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 16:47:50.243643   17846 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 16:47:50.243736   17846 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 16:47:50.243834   17846 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 16:47:50.464921   17846 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 16:47:50.466503   17846 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 16:47:50.473447   17846 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1101 16:47:50.541678   17846 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 16:47:50.562555   17846 out.go:204]   - Generating certificates and keys ...
	I1101 16:47:50.562630   17846 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 16:47:50.562721   17846 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 16:47:50.562794   17846 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 16:47:50.562898   17846 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1101 16:47:50.563021   17846 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 16:47:50.563092   17846 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1101 16:47:50.563167   17846 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1101 16:47:50.563245   17846 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1101 16:47:50.563351   17846 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 16:47:50.563446   17846 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 16:47:50.563482   17846 kubeadm.go:317] [certs] Using the existing "sa" key
	I1101 16:47:50.563539   17846 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 16:47:50.640384   17846 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 16:47:50.765850   17846 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 16:47:50.844605   17846 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 16:47:51.150107   17846 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 16:47:51.150651   17846 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 16:47:47.119807   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:47:49.618876   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:47:51.172236   17846 out.go:204]   - Booting up control plane ...
	I1101 16:47:51.172365   17846 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 16:47:51.172435   17846 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 16:47:51.172497   17846 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 16:47:51.172558   17846 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 16:47:51.172715   17846 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 16:47:51.620460   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:47:54.117438   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:47:56.119852   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:47:58.617110   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:00.620206   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:03.116774   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:05.117077   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:07.119378   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:09.617978   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:12.117870   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:14.619301   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:17.117718   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:19.118464   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:21.618033   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:23.619981   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:26.119518   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:28.616655   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:30.619929   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:31.132519   17846 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1101 16:48:31.133304   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:48:31.133805   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:48:33.116765   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:35.117638   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:36.130239   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:48:36.130466   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:48:37.119058   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:39.619917   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:42.118946   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:44.617893   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:46.123464   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:48:46.123623   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
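[editor's note] The repeating kubelet-check failures above come from kubeadm probing the kubelet's local healthz endpoint; "connection refused" means nothing is listening on port 10248 at all, i.e. the kubelet process never came up. An equivalent one-shot probe, for illustration only:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same endpoint kubeadm's kubelet-check hits; a refused connection
	// means the kubelet is not running, matching the log above.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz status:", resp.Status)
}
[/editor's note]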
	I1101 16:48:46.617974   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:48.618949   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:50.620085   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:53.119999   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:55.617682   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:48:58.118503   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:00.619719   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:03.115815   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:05.117549   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:06.110432   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:49:06.110727   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:49:07.119762   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:09.617428   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:11.619581   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:14.119538   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:16.617124   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:19.116729   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:21.117538   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:23.618274   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:26.115974   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:28.117706   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:30.619631   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:33.116666   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:35.118431   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:37.118748   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:39.615854   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:41.618151   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:44.116895   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:46.082375   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:49:46.082630   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:49:46.082648   17846 kubeadm.go:317] 
	I1101 16:49:46.082688   17846 kubeadm.go:317] Unfortunately, an error has occurred:
	I1101 16:49:46.082728   17846 kubeadm.go:317] 	timed out waiting for the condition
	I1101 16:49:46.082736   17846 kubeadm.go:317] 
	I1101 16:49:46.082795   17846 kubeadm.go:317] This error is likely caused by:
	I1101 16:49:46.082882   17846 kubeadm.go:317] 	- The kubelet is not running
	I1101 16:49:46.083011   17846 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1101 16:49:46.083026   17846 kubeadm.go:317] 
	I1101 16:49:46.083139   17846 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1101 16:49:46.083168   17846 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1101 16:49:46.083194   17846 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1101 16:49:46.083198   17846 kubeadm.go:317] 
	I1101 16:49:46.083283   17846 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1101 16:49:46.083361   17846 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1101 16:49:46.083444   17846 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1101 16:49:46.083495   17846 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1101 16:49:46.083557   17846 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1101 16:49:46.083584   17846 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1101 16:49:46.085941   17846 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1101 16:49:46.086055   17846 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1101 16:49:46.086140   17846 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 16:49:46.086212   17846 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1101 16:49:46.086267   17846 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1101 16:49:46.086408   17846 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1101 16:49:46.086434   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1101 16:49:46.510880   17846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 16:49:46.521500   17846 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 16:49:46.521571   17846 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 16:49:46.528942   17846 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 16:49:46.528965   17846 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 16:49:46.575769   17846 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1101 16:49:46.575851   17846 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 16:49:46.870407   17846 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 16:49:46.870491   17846 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 16:49:46.870571   17846 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 16:49:47.092717   17846 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 16:49:47.093492   17846 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 16:49:47.100457   17846 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1101 16:49:47.174081   17846 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 16:49:47.195741   17846 out.go:204]   - Generating certificates and keys ...
	I1101 16:49:47.195829   17846 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 16:49:47.195890   17846 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 16:49:47.195970   17846 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 16:49:47.196041   17846 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1101 16:49:47.196103   17846 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 16:49:47.196146   17846 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1101 16:49:47.196202   17846 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1101 16:49:47.196281   17846 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1101 16:49:47.196364   17846 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 16:49:47.196420   17846 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 16:49:47.196452   17846 kubeadm.go:317] [certs] Using the existing "sa" key
	I1101 16:49:47.196504   17846 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 16:49:47.348838   17846 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 16:49:47.486404   17846 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 16:49:47.568476   17846 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 16:49:47.627774   17846 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 16:49:47.628523   17846 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 16:49:47.670718   17846 out.go:204]   - Booting up control plane ...
	I1101 16:49:47.670811   17846 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 16:49:47.670875   17846 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 16:49:47.670930   17846 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 16:49:47.671079   17846 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 16:49:47.671263   17846 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 16:49:46.616255   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:49.117169   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:51.117251   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:53.121890   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:55.619226   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:49:58.116290   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:00.116906   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:02.117078   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:04.616356   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:06.617325   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:08.619538   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:11.118929   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:13.120674   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:15.617253   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:18.118947   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:20.617292   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:22.618152   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:25.117998   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:27.611062   17846 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1101 16:50:27.612231   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:50:27.612522   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:50:27.616142   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:29.616684   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:32.608994   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:50:32.609156   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:50:31.617095   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:34.116420   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:36.117195   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:38.618679   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:41.115647   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:42.603145   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:50:42.603381   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:50:43.117746   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:45.618607   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:48.115656   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:50.617933   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:53.118785   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:55.118926   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:50:57.616999   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:00.118248   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:02.590328   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:51:02.590555   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:51:02.618698   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:05.118418   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:07.614413   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:09.617688   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:12.116036   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:14.118396   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:16.616814   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:18.617994   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:21.115491   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:23.119131   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:25.617879   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:28.116238   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:30.117104   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:32.117999   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:34.118120   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:36.617278   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:39.116919   18448 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-6vrkx" in "kube-system" namespace has status "Ready":"False"
	I1101 16:51:42.562423   17846 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 16:51:42.562742   17846 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 16:51:42.562757   17846 kubeadm.go:317] 
	I1101 16:51:42.562810   17846 kubeadm.go:317] Unfortunately, an error has occurred:
	I1101 16:51:42.562880   17846 kubeadm.go:317] 	timed out waiting for the condition
	I1101 16:51:42.562887   17846 kubeadm.go:317] 
	I1101 16:51:42.562945   17846 kubeadm.go:317] This error is likely caused by:
	I1101 16:51:42.563004   17846 kubeadm.go:317] 	- The kubelet is not running
	I1101 16:51:42.563135   17846 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1101 16:51:42.563149   17846 kubeadm.go:317] 
	I1101 16:51:42.563253   17846 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1101 16:51:42.563286   17846 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1101 16:51:42.563325   17846 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1101 16:51:42.563333   17846 kubeadm.go:317] 
	I1101 16:51:42.563431   17846 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1101 16:51:42.563556   17846 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1101 16:51:42.563639   17846 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1101 16:51:42.563681   17846 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1101 16:51:42.563741   17846 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1101 16:51:42.563767   17846 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1101 16:51:42.567302   17846 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1101 16:51:42.567408   17846 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1101 16:51:42.567496   17846 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 16:51:42.567572   17846 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1101 16:51:42.567632   17846 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1101 16:51:42.567668   17846 kubeadm.go:398] StartCluster complete in 7m57.342308605s
	I1101 16:51:42.567763   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1101 16:51:42.590192   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.590204   17846 logs.go:276] No container was found matching "kube-apiserver"
	I1101 16:51:42.590288   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1101 16:51:42.614234   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.614247   17846 logs.go:276] No container was found matching "etcd"
	I1101 16:51:42.614333   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1101 16:51:42.637462   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.637474   17846 logs.go:276] No container was found matching "coredns"
	I1101 16:51:42.637556   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1101 16:51:42.659360   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.659370   17846 logs.go:276] No container was found matching "kube-scheduler"
	I1101 16:51:42.659453   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1101 16:51:42.682378   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.682393   17846 logs.go:276] No container was found matching "kube-proxy"
	I1101 16:51:42.682478   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1101 16:51:42.705215   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.705227   17846 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 16:51:42.705312   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1101 16:51:42.727584   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.727596   17846 logs.go:276] No container was found matching "storage-provisioner"
	I1101 16:51:42.727677   17846 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1101 16:51:42.750243   17846 logs.go:274] 0 containers: []
	W1101 16:51:42.750254   17846 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 16:51:42.750262   17846 logs.go:123] Gathering logs for kubelet ...
	I1101 16:51:42.750269   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 16:51:42.788804   17846 logs.go:123] Gathering logs for dmesg ...
	I1101 16:51:42.788818   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 16:51:42.801590   17846 logs.go:123] Gathering logs for describe nodes ...
	I1101 16:51:42.801603   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 16:51:42.855623   17846 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 16:51:42.855634   17846 logs.go:123] Gathering logs for Docker ...
	I1101 16:51:42.855640   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1101 16:51:42.869204   17846 logs.go:123] Gathering logs for container status ...
	I1101 16:51:42.869216   17846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 16:51:44.921375   17846 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052166745s)
	W1101 16:51:44.921496   17846 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1101 16:51:44.921510   17846 out.go:239] * 
	W1101 16:51:44.921638   17846 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 16:51:44.921655   17846 out.go:239] * 
	W1101 16:51:44.922287   17846 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 16:51:44.988165   17846 out.go:177] 
	W1101 16:51:45.032349   17846 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 16:51:45.032491   17846 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1101 16:51:45.032597   17846 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1101 16:51:45.075179   17846 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-11-01 23:43:42 UTC, end at Tue 2022-11-01 23:51:46 UTC. --
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: Stopping Docker Application Container Engine...
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[133]: time="2022-11-01T23:43:44.286667976Z" level=info msg="Processing signal 'terminated'"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[133]: time="2022-11-01T23:43:44.287593863Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[133]: time="2022-11-01T23:43:44.288112899Z" level=info msg="Daemon shutdown complete"
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: docker.service: Succeeded.
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: Stopped Docker Application Container Engine.
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: Starting Docker Application Container Engine...
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.329553637Z" level=info msg="Starting up"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.331286692Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.331318371Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.331337221Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.331345474Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.332301530Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.332334687Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.332347311Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.332353687Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.336339693Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.342180285Z" level=info msg="Loading containers: start."
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.419278363Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.449439673Z" level=info msg="Loading containers: done."
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.456899068Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.456954410Z" level=info msg="Daemon has completed initialization"
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: Started Docker Application Container Engine.
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.478922547Z" level=info msg="API listen on [::]:2376"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.484610172Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-11-01T23:51:48Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:51:49 up  1:21,  0 users,  load average: 0.78, 0.94, 1.00
	Linux old-k8s-version-163757 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-11-01 23:43:42 UTC, end at Tue 2022-11-01 23:51:49 UTC. --
	Nov 01 23:51:47 old-k8s-version-163757 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 01 23:51:48 old-k8s-version-163757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Nov 01 23:51:48 old-k8s-version-163757 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 01 23:51:48 old-k8s-version-163757 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 01 23:51:48 old-k8s-version-163757 kubelet[14413]: I1101 23:51:48.484939   14413 server.go:410] Version: v1.16.0
	Nov 01 23:51:48 old-k8s-version-163757 kubelet[14413]: I1101 23:51:48.485328   14413 plugins.go:100] No cloud provider specified.
	Nov 01 23:51:48 old-k8s-version-163757 kubelet[14413]: I1101 23:51:48.485408   14413 server.go:773] Client rotation is on, will bootstrap in background
	Nov 01 23:51:48 old-k8s-version-163757 kubelet[14413]: I1101 23:51:48.487235   14413 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 01 23:51:48 old-k8s-version-163757 kubelet[14413]: W1101 23:51:48.488115   14413 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 01 23:51:48 old-k8s-version-163757 kubelet[14413]: W1101 23:51:48.488214   14413 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 01 23:51:48 old-k8s-version-163757 kubelet[14413]: F1101 23:51:48.488503   14413 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 01 23:51:48 old-k8s-version-163757 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 01 23:51:48 old-k8s-version-163757 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 01 23:51:49 old-k8s-version-163757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Nov 01 23:51:49 old-k8s-version-163757 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 01 23:51:49 old-k8s-version-163757 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 01 23:51:49 old-k8s-version-163757 kubelet[14448]: I1101 23:51:49.224064   14448 server.go:410] Version: v1.16.0
	Nov 01 23:51:49 old-k8s-version-163757 kubelet[14448]: I1101 23:51:49.224346   14448 plugins.go:100] No cloud provider specified.
	Nov 01 23:51:49 old-k8s-version-163757 kubelet[14448]: I1101 23:51:49.224356   14448 server.go:773] Client rotation is on, will bootstrap in background
	Nov 01 23:51:49 old-k8s-version-163757 kubelet[14448]: I1101 23:51:49.226683   14448 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 01 23:51:49 old-k8s-version-163757 kubelet[14448]: W1101 23:51:49.227650   14448 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 01 23:51:49 old-k8s-version-163757 kubelet[14448]: W1101 23:51:49.227752   14448 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 01 23:51:49 old-k8s-version-163757 kubelet[14448]: F1101 23:51:49.227801   14448 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 01 23:51:49 old-k8s-version-163757 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 01 23:51:49 old-k8s-version-163757 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 16:51:49.033074   18789 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
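The suggestion embedded in the captured output above (pass --extra-config=kubelet.cgroup-driver=systemd) can be exercised directly against this profile. A minimal sketch of such a retry, assuming the same profile name, driver and Kubernetes version that appear in this log (the original start command's full flag set is not reproduced here, so these flags are illustrative only):

	out/minikube-darwin-amd64 start -p old-k8s-version-163757 \
	  --kubernetes-version=v1.16.0 --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd
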
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-163757 -n old-k8s-version-163757
E1101 16:51:49.732971    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-163757 -n old-k8s-version-163757: exit status 2 (440.128279ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-163757" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (489.67s)
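The kubelet excerpt above dies on every restart with "failed to run Kubelet: mountpoint for cpu not found", which points at the cgroup layout inside the docker-driver node rather than at kubeadm itself. A hedged sketch of how one might confirm that from the host follows; the commands are illustrative and were not part of this run (stat typically reports cgroup2fs when the node is on cgroup v2, where a v1.16 kubelet cannot find a cpu mountpoint):

	# filesystem type of the cgroup mount inside the node
	out/minikube-darwin-amd64 ssh -p old-k8s-version-163757 -- stat -fc %T /sys/fs/cgroup/
	# present only when a cgroup v1 cpu controller is mounted
	out/minikube-darwin-amd64 ssh -p old-k8s-version-163757 -- ls /sys/fs/cgroup/cpu
	# cgroup driver Docker reports on the node
	out/minikube-darwin-amd64 ssh -p old-k8s-version-163757 -- docker info --format '{{.CgroupDriver}}'
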

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:52:19.502775    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:52:35.705598    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:52:44.333077    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:53:11.382426    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:53:32.039034    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:54:17.427999    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:54:41.616444    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:55:02.303303    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:55:30.011738    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:55:48.865555    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:56:04.662290    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:56:18.498303    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:56:35.094851    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:56:48.336763    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:56:49.729959    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:57:11.914097    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:57:19.506587    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:57:35.726949    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:57:44.359739    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 16:57:49.854246    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:57:54.408870    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:58:12.814344    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:58:32.071045    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:58:42.583622    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:58:58.784925    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 16:59:41.645499    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:00:02.331881    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:00:48.896217    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:01:18.530937    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-163757 -n old-k8s-version-163757
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-163757 -n old-k8s-version-163757: exit status 2 (398.921075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-163757" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-163757
helpers_test.go:235: (dbg) docker inspect old-k8s-version-163757:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e",
	        "Created": "2022-11-01T23:38:04.256272958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274043,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T23:43:41.854152852Z",
	            "FinishedAt": "2022-11-01T23:43:38.949849093Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/hostname",
	        "HostsPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/hosts",
	        "LogPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e-json.log",
	        "Name": "/old-k8s-version-163757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-163757:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-163757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a-init/diff:/var/lib/docker/overlay2/397c781354d1ae8b5c71df69b26a9a2493cf01723d23317a9b36f56b62ab53f3/diff:/var/lib/docker/overlay2/fe3fd9f7a011255c997093c6f7e1cb70c20cab26db5f52ff8b83c33d58519532/diff:/var/lib/docker/overlay2/f7328bad1e482720081fe1f9d1ab2ee05c71a9060abf63daf63a25e84818f237/diff:/var/lib/docker/overlay2/ca039979ed22affed678394443deee5ed35f2eb49243537b4205433189b87b2c/diff:/var/lib/docker/overlay2/a2ee3e754036b8777f801c847988e78d9b0ef881e82ea7467cef35a1261b9e20/diff:/var/lib/docker/overlay2/3de609efaeca546b0261017a1b19a9fa9ff6c9272609346b897e8075687c3698/diff:/var/lib/docker/overlay2/9101d388c406c87b2d10dc219dc3225ea59bfbedfc167adbfdf7578ed74a528b/diff:/var/lib/docker/overlay2/ba2db849d29a96ccb7729ee8861cfb647a06ba046b1016e99e3c2ef9e7b92675/diff:/var/lib/docker/overlay2/bb7315b5e1884c47eaad6eddfa4e422b1b240ff1d1112deab5ff41e40a12970d/diff:/var/lib/docker/overlay2/25fd1b
7d003c93a7ef576bb052318e940d8e1c8a40db37179b03563a8a099490/diff:/var/lib/docker/overlay2/f22743b1afcc328f7d2c4740efeb1401d6c011f499d200dc16b11a352dfc07f7/diff:/var/lib/docker/overlay2/59ca3268b7b3862516f40c07f313c5cdbe659f949ce4bd6e4eedcfcdd80409b0/diff:/var/lib/docker/overlay2/ce66536b9c7b7d4d38eeb3b0f5842c927c181c4584e60fa25989b9de30ec5856/diff:/var/lib/docker/overlay2/f0bdec7810d2b53f48492f34d7889fdb7c86d692422978de474816cf3bf8e923/diff:/var/lib/docker/overlay2/b0f0a882b23b6635539c83a8a2837c52090aa306e12f64ed83edcd03596f0cde/diff:/var/lib/docker/overlay2/60180139b1a11a94ee6174e6512bad4a5e162470c686d6cc7c91d7c9fb1907a2/diff:/var/lib/docker/overlay2/f1a7c8c448077705a2b48dfccf2f6e599a8ef782efd7d171b349ad43a0cddcae/diff:/var/lib/docker/overlay2/d64e00c1407419f2261e34d0974453ad696f514f79d8ecdac1b8c3a2a117349c/diff:/var/lib/docker/overlay2/7af90e8306e3b3e8ed7d2d67099da7a7cbe0ed97a5b983c84548135857efc4d0/diff:/var/lib/docker/overlay2/85101cd67d726a8a42d8951a230b3acd76d4a62615c6ffe4aac1ebef17ab422d/diff:/var/lib/d
ocker/overlay2/09a5d9c2f9897ae114e76d4aed5af38d250d044b1d274f8dafa0cfd17789ea54/diff:/var/lib/docker/overlay2/a6b97f972b460567b473da6022dd8658db13cb06830fcb676e8c1ebc927e1d44/diff:/var/lib/docker/overlay2/b569cecedfd9b79ea9a49645099405472d529e224ffe4abed0921d9fbec171a7/diff:/var/lib/docker/overlay2/278ceb611708e5dc8e810eaeb6b08b283d298009965d14772f2b61f95355477a/diff:/var/lib/docker/overlay2/c6693259dde0f3190d9019d8aca0c27c980d5c31a40fff8274d2a57d8ef19f41/diff:/var/lib/docker/overlay2/4db1d3b0ba37b1bfa0f486b9c1b327686a1069e2e6cbfc2e279c1f597f7cd346/diff:/var/lib/docker/overlay2/50e4b8ce3599837ac51b108fd983aa9b876f47f3e7253cd0976be8df23c73a33/diff:/var/lib/docker/overlay2/ad2b5d101e83bca01ddb2257701208ceb46b4668f6d14e84ee171975bb6175db/diff:/var/lib/docker/overlay2/746a904e8c69bb992522394e576896d4e35d056023809a58fbac92d497d2968a/diff:/var/lib/docker/overlay2/03794e35d9fe845753f9bcb5648e7a7c1fcf7db9bcd82c7c3824c2142cb8a2b6/diff:/var/lib/docker/overlay2/75caadeb2dfb8cc524a4e0f9d7862ccf017f755a24e00453f5a85eb29a5
837de/diff:/var/lib/docker/overlay2/1a5ce4ae9316bb13d1739267bf6b30a17188ca9ac127663735bfac3d15e50abe/diff:/var/lib/docker/overlay2/fa61eaf7b77e6fa75456860b8b75e4779478979f9b4ad94cd62eadd22743421e/diff:/var/lib/docker/overlay2/9c1cd4fe6bd059e33f020198f5ff305dab3f4b102b14b5894c76cae7dc769b92/diff:/var/lib/docker/overlay2/46cf92e0e9cc79002bfb0f5c2e0ab28c771f260b3fea2cb434cd84d3a1ea7659/diff:/var/lib/docker/overlay2/b47be14a30a9c0339a3a49b552cad979169d6c9a909e7837759a155b4c74d128/diff:/var/lib/docker/overlay2/598716c3d9ddb5de953d6a462fc1af49f742bbe02fd1c01f7d548a9f93d3913d/diff:/var/lib/docker/overlay2/cd665df1518202898f79e694456b55b64d6095a28556be2dc545241df7633be7/diff:/var/lib/docker/overlay2/909b0f879f4ce91be83bada76dad0599c2839fa8a6534f976ee095ad44dce7c6/diff:/var/lib/docker/overlay2/fd78ebbf3c4baf9a9f0036cb0ed9a8908a05f2e78572d88fcb3f026cb000710b/diff:/var/lib/docker/overlay2/8a030c72fc8571d3240e0ab2d2aea23b84385f28f3ef2dd82b5be5b925dbca5b/diff:/var/lib/docker/overlay2/d87a4221a646268a958798509b8c3cb343463c
c8427ae96a424f653a0a4508c7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-163757",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-163757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-163757",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-163757",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-163757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a78b66e11436bdcef5ae4e878d76bd762a44be207b062530209a62e8ac180eb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53982"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53983"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53984"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53981"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6a78b66e1143",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-163757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "68479d844c03",
	                        "old-k8s-version-163757"
	                    ],
	                    "NetworkID": "de11f6b0d4a3e9909764ae953f0f910d0d29438f96300416f12a7f896caa0f32",
	                    "EndpointID": "d35627681b46bada56d185972cf0b735b505b074234eaf79ad5bd6396bcc6bec",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757: exit status 2 (394.368661ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-163757 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-163757 logs -n 25: (3.448025298s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-164600                                      | embed-certs-164600           | jenkins | v1.27.1 | 01 Nov 22 16:52 PDT | 01 Nov 22 16:52 PDT |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p embed-certs-164600                                      | embed-certs-164600           | jenkins | v1.27.1 | 01 Nov 22 16:52 PDT | 01 Nov 22 16:52 PDT |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-164600                                      | embed-certs-164600           | jenkins | v1.27.1 | 01 Nov 22 16:52 PDT | 01 Nov 22 16:52 PDT |
	| delete  | -p embed-certs-164600                                      | embed-certs-164600           | jenkins | v1.27.1 | 01 Nov 22 16:52 PDT | 01 Nov 22 16:52 PDT |
	| delete  | -p                                                         | disable-driver-mounts-165249 | jenkins | v1.27.1 | 01 Nov 22 16:52 PDT | 01 Nov 22 16:52 PDT |
	|         | disable-driver-mounts-165249                               |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:52 PDT | 01 Nov 22 16:53 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:53 PDT | 01 Nov 22 16:53 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:53 PDT | 01 Nov 22 16:53 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-165249           | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:53 PDT | 01 Nov 22 16:53 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:53 PDT | 01 Nov 22 16:58 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:59 PDT | 01 Nov 22 16:59 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                              |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:59 PDT | 01 Nov 22 16:59 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:59 PDT | 01 Nov 22 16:59 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:59 PDT | 01 Nov 22 16:59 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:59 PDT | 01 Nov 22 16:59 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	| start   | -p newest-cni-165923 --memory=2200 --alsologtostderr       | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 16:59 PDT | 01 Nov 22 17:00 PDT |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-165923                 | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-165923                                       | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-165923                      | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-165923 --memory=2200 --alsologtostderr       | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-165923 sudo                                  | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-165923                                       | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-165923                                       | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-165923                                       | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	| delete  | -p newest-cni-165923                                       | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/01 17:00:20
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 17:00:20.388122   20960 out.go:296] Setting OutFile to fd 1 ...
	I1101 17:00:20.388311   20960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 17:00:20.388317   20960 out.go:309] Setting ErrFile to fd 2...
	I1101 17:00:20.388322   20960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 17:00:20.388444   20960 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	I1101 17:00:20.389013   20960 out.go:303] Setting JSON to false
	I1101 17:00:20.407630   20960 start.go:116] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5395,"bootTime":1667341825,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1101 17:00:20.407763   20960 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1101 17:00:20.429679   20960 out.go:177] * [newest-cni-165923] minikube v1.27.1 on Darwin 13.0
	I1101 17:00:20.451290   20960 notify.go:220] Checking for updates...
	I1101 17:00:20.473429   20960 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 17:00:20.495391   20960 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 17:00:20.517201   20960 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1101 17:00:20.538655   20960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 17:00:20.581201   20960 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	I1101 17:00:20.604975   20960 config.go:180] Loaded profile config "newest-cni-165923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 17:00:20.605638   20960 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 17:00:20.668062   20960 docker.go:137] docker version: linux-20.10.20
	I1101 17:00:20.668217   20960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 17:00:20.809840   20960 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-02 00:00:20.740340094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 17:00:20.853179   20960 out.go:177] * Using the docker driver based on existing profile
	I1101 17:00:20.874668   20960 start.go:282] selected driver: docker
	I1101 17:00:20.874698   20960 start.go:808] validating driver "docker" against &{Name:newest-cni-165923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-165923 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Netw
ork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 17:00:20.874861   20960 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 17:00:20.878753   20960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 17:00:21.019419   20960 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-02 00:00:20.95094078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/loc
al/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 17:00:21.019596   20960 start_flags.go:907] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 17:00:21.019617   20960 cni.go:95] Creating CNI manager for ""
	I1101 17:00:21.019626   20960 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 17:00:21.019640   20960 start_flags.go:317] config:
	{Name:newest-cni-165923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-165923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 17:00:21.061974   20960 out.go:177] * Starting control plane node newest-cni-165923 in cluster newest-cni-165923
	I1101 17:00:21.083144   20960 cache.go:120] Beginning downloading kic base image for docker with docker
	I1101 17:00:21.104232   20960 out.go:177] * Pulling base image ...
	I1101 17:00:21.148245   20960 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1101 17:00:21.148276   20960 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 17:00:21.148347   20960 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1101 17:00:21.148370   20960 cache.go:57] Caching tarball of preloaded images
	I1101 17:00:21.149303   20960 preload.go:174] Found /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 17:00:21.149422   20960 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1101 17:00:21.149865   20960 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/config.json ...
	I1101 17:00:21.205136   20960 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1101 17:00:21.205153   20960 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1101 17:00:21.205162   20960 cache.go:208] Successfully downloaded all kic artifacts
	I1101 17:00:21.205215   20960 start.go:364] acquiring machines lock for newest-cni-165923: {Name:mkc0aae0e96bf69787cefd62e998860d82986621 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 17:00:21.205301   20960 start.go:368] acquired machines lock for "newest-cni-165923" in 65.397µs
	I1101 17:00:21.205335   20960 start.go:96] Skipping create...Using existing machine configuration
	I1101 17:00:21.205346   20960 fix.go:55] fixHost starting: 
	I1101 17:00:21.205631   20960 cli_runner.go:164] Run: docker container inspect newest-cni-165923 --format={{.State.Status}}
	I1101 17:00:21.262841   20960 fix.go:103] recreateIfNeeded on newest-cni-165923: state=Stopped err=<nil>
	W1101 17:00:21.262877   20960 fix.go:129] unexpected machine state, will restart: <nil>
	I1101 17:00:21.306194   20960 out.go:177] * Restarting existing docker container for "newest-cni-165923" ...
	I1101 17:00:21.327814   20960 cli_runner.go:164] Run: docker start newest-cni-165923
	I1101 17:00:21.658055   20960 cli_runner.go:164] Run: docker container inspect newest-cni-165923 --format={{.State.Status}}
	I1101 17:00:21.719727   20960 kic.go:415] container "newest-cni-165923" state is running.
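
The restart sequence above inspects the container's state, finds it Stopped, runs docker start, and re-inspects until the state reports running. Below is a minimal sketch of that check-then-start pattern using the plain docker CLI; it is illustrative only (minikube drives the same commands through its cli_runner/kic code, as the file names in the log show), and the container name is just the one from this run.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // ensureRunning queries a container's state and starts it when it is not running,
    // mirroring the inspect -> start -> inspect sequence in the log above.
    func ensureRunning(name string) error {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return err
        }
        if strings.TrimSpace(string(out)) != "running" {
            // The container already exists but is stopped; restart it in place.
            return exec.Command("docker", "start", name).Run()
        }
        return nil
    }

    func main() {
        if err := ensureRunning("newest-cni-165923"); err != nil {
            fmt.Println("ensureRunning:", err)
        }
    }
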
	I1101 17:00:21.720399   20960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-165923
	I1101 17:00:21.785189   20960 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/config.json ...
	I1101 17:00:21.785878   20960 machine.go:88] provisioning docker machine ...
	I1101 17:00:21.785909   20960 ubuntu.go:169] provisioning hostname "newest-cni-165923"
	I1101 17:00:21.786006   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:21.862346   20960 main.go:134] libmachine: Using SSH client type: native
	I1101 17:00:21.862650   20960 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54981 <nil> <nil>}
	I1101 17:00:21.862700   20960 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-165923 && echo "newest-cni-165923" | sudo tee /etc/hostname
	I1101 17:00:21.998520   20960 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-165923
	
	I1101 17:00:21.998620   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:22.059881   20960 main.go:134] libmachine: Using SSH client type: native
	I1101 17:00:22.060049   20960 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54981 <nil> <nil>}
	I1101 17:00:22.060062   20960 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-165923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-165923/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-165923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 17:00:22.177190   20960 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 17:00:22.177208   20960 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15232-2108/.minikube CaCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15232-2108/.minikube}
	I1101 17:00:22.177238   20960 ubuntu.go:177] setting up certificates
	I1101 17:00:22.177247   20960 provision.go:83] configureAuth start
	I1101 17:00:22.177342   20960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-165923
	I1101 17:00:22.240837   20960 provision.go:138] copyHostCerts
	I1101 17:00:22.240953   20960 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem, removing ...
	I1101 17:00:22.240963   20960 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem
	I1101 17:00:22.241062   20960 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem (1082 bytes)
	I1101 17:00:22.241283   20960 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem, removing ...
	I1101 17:00:22.241292   20960 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem
	I1101 17:00:22.241356   20960 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem (1123 bytes)
	I1101 17:00:22.241519   20960 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem, removing ...
	I1101 17:00:22.241525   20960 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem
	I1101 17:00:22.241585   20960 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem (1675 bytes)
	I1101 17:00:22.241731   20960 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem org=jenkins.newest-cni-165923 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-165923]
	I1101 17:00:22.355775   20960 provision.go:172] copyRemoteCerts
	I1101 17:00:22.355853   20960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 17:00:22.355928   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:22.418848   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:22.505987   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 17:00:22.524841   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 17:00:22.544114   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 17:00:22.565533   20960 provision.go:86] duration metric: configureAuth took 388.27009ms
	I1101 17:00:22.565549   20960 ubuntu.go:193] setting minikube options for container-runtime
	I1101 17:00:22.565746   20960 config.go:180] Loaded profile config "newest-cni-165923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 17:00:22.565833   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:22.632727   20960 main.go:134] libmachine: Using SSH client type: native
	I1101 17:00:22.632897   20960 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54981 <nil> <nil>}
	I1101 17:00:22.632906   20960 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 17:00:22.751500   20960 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1101 17:00:22.751512   20960 ubuntu.go:71] root file system type: overlay
	I1101 17:00:22.751640   20960 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 17:00:22.751748   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:22.809637   20960 main.go:134] libmachine: Using SSH client type: native
	I1101 17:00:22.809791   20960 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54981 <nil> <nil>}
	I1101 17:00:22.809844   20960 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 17:00:22.937679   20960 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 17:00:22.937809   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:23.000892   20960 main.go:134] libmachine: Using SSH client type: native
	I1101 17:00:23.001049   20960 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54981 <nil> <nil>}
	I1101 17:00:23.001062   20960 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 17:00:23.122607   20960 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 17:00:23.122625   20960 machine.go:91] provisioned docker machine in 1.336750573s
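
The provisioning step above pipes the rendered unit to /lib/systemd/system/docker.service.new over SSH and only moves it into place (followed by daemon-reload and a docker restart) when diff reports a difference, so an unchanged unit never triggers a restart. A sketch of that guarded swap is below; it assumes a plain ssh client reaching the container's sshd on the mapped host port 54981 seen in this run, whereas minikube itself issues the command through its internal SSH runner.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyDockerUnit swaps in the rendered unit only when it differs from the live one,
    // then reloads systemd and restarts docker, like the diff-or-replace command in the log.
    func applyDockerUnit(sshTarget, port string) error {
        cmd := "sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new" +
            " || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service;" +
            " sudo systemctl daemon-reload && sudo systemctl restart docker; }"
        return exec.Command("ssh", "-p", port, sshTarget, cmd).Run()
    }

    func main() {
        // docker@127.0.0.1:54981 is the SSH endpoint used in this particular run.
        if err := applyDockerUnit("docker@127.0.0.1", "54981"); err != nil {
            fmt.Println("apply failed:", err)
        }
    }
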
	I1101 17:00:23.122636   20960 start.go:300] post-start starting for "newest-cni-165923" (driver="docker")
	I1101 17:00:23.122641   20960 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 17:00:23.122732   20960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 17:00:23.122797   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:23.181387   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:23.268392   20960 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 17:00:23.271666   20960 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 17:00:23.271682   20960 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 17:00:23.271688   20960 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 17:00:23.271693   20960 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1101 17:00:23.271701   20960 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/addons for local assets ...
	I1101 17:00:23.271791   20960 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/files for local assets ...
	I1101 17:00:23.271958   20960 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem -> 34132.pem in /etc/ssl/certs
	I1101 17:00:23.272135   20960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 17:00:23.279120   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /etc/ssl/certs/34132.pem (1708 bytes)
	I1101 17:00:23.296995   20960 start.go:303] post-start completed in 174.35189ms
	I1101 17:00:23.297082   20960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 17:00:23.297150   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:23.359937   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:23.446667   20960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 17:00:23.451319   20960 fix.go:57] fixHost completed within 2.24599353s
	I1101 17:00:23.451330   20960 start.go:83] releasing machines lock for "newest-cni-165923", held for 2.246044356s
	I1101 17:00:23.451419   20960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-165923
	I1101 17:00:23.511174   20960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 17:00:23.511209   20960 ssh_runner.go:195] Run: systemctl --version
	I1101 17:00:23.511268   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:23.511279   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:23.575633   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:23.576195   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:23.661055   20960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1101 17:00:23.722646   20960 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1101 17:00:23.735601   20960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 17:00:23.805172   20960 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1101 17:00:23.881625   20960 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 17:00:23.892299   20960 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1101 17:00:23.892375   20960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 17:00:23.902503   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 17:00:23.916375   20960 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 17:00:23.984133   20960 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 17:00:24.042903   20960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 17:00:24.118282   20960 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 17:00:24.345870   20960 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 17:00:24.420017   20960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 17:00:24.491696   20960 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1101 17:00:24.502155   20960 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1101 17:00:24.502239   20960 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1101 17:00:24.506138   20960 start.go:472] Will wait 60s for crictl version
	I1101 17:00:24.506182   20960 ssh_runner.go:195] Run: sudo crictl version
	I1101 17:00:24.535836   20960 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1101 17:00:24.535928   20960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 17:00:24.563544   20960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 17:00:24.616098   20960 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1101 17:00:24.616243   20960 cli_runner.go:164] Run: docker exec -t newest-cni-165923 dig +short host.docker.internal
	I1101 17:00:24.730007   20960 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1101 17:00:24.730128   20960 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1101 17:00:24.734602   20960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 17:00:24.744543   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:24.825157   20960 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I1101 17:00:24.846808   20960 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1101 17:00:24.846917   20960 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 17:00:24.871420   20960 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1101 17:00:24.871439   20960 docker.go:543] Images already preloaded, skipping extraction
	I1101 17:00:24.871535   20960 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 17:00:24.897183   20960 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1101 17:00:24.897203   20960 cache_images.go:84] Images are preloaded, skipping loading
	I1101 17:00:24.897335   20960 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1101 17:00:24.968938   20960 cni.go:95] Creating CNI manager for ""
	I1101 17:00:24.968954   20960 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 17:00:24.968969   20960 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I1101 17:00:24.968980   20960 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-165923 NodeName:newest-cni-165923 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArg
s:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1101 17:00:24.969095   20960 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-165923"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 17:00:24.969175   20960 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-165923 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:newest-cni-165923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
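
The pod CIDR passed to the test as kubeadm.pod-network-cidr=192.168.111.111/16 ends up in the generated config both as networking.podSubnet and as the KubeProxyConfiguration clusterCIDR. Note that the value carries non-zero host bits; a standalone check (purely illustrative, not part of the test) shows how Go's net package normalizes it to the 192.168.0.0/16 network:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ip, ipnet, err := net.ParseCIDR("192.168.111.111/16")
        if err != nil {
            panic(err)
        }
        // Prints: ip=192.168.111.111 network=192.168.0.0/16
        fmt.Printf("ip=%s network=%s\n", ip, ipnet)
    }
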
	I1101 17:00:24.969267   20960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1101 17:00:24.977648   20960 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 17:00:24.977710   20960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 17:00:24.985705   20960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (516 bytes)
	I1101 17:00:24.999502   20960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 17:00:25.013824   20960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1101 17:00:25.028100   20960 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1101 17:00:25.032024   20960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 17:00:25.041948   20960 certs.go:54] Setting up /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923 for IP: 192.168.67.2
	I1101 17:00:25.042078   20960 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key
	I1101 17:00:25.042204   20960 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key
	I1101 17:00:25.042373   20960 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/client.key
	I1101 17:00:25.042483   20960 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/apiserver.key.c7fa3a9e
	I1101 17:00:25.042548   20960 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/proxy-client.key
	I1101 17:00:25.042876   20960 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem (1338 bytes)
	W1101 17:00:25.042921   20960 certs.go:384] ignoring /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413_empty.pem, impossibly tiny 0 bytes
	I1101 17:00:25.042933   20960 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 17:00:25.042974   20960 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem (1082 bytes)
	I1101 17:00:25.043015   20960 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem (1123 bytes)
	I1101 17:00:25.043058   20960 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem (1675 bytes)
	I1101 17:00:25.043145   20960 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem (1708 bytes)
	I1101 17:00:25.043775   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 17:00:25.063851   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 17:00:25.083046   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 17:00:25.102623   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 17:00:25.121193   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 17:00:25.140789   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 17:00:25.161072   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 17:00:25.180224   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 17:00:25.199382   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 17:00:25.219039   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem --> /usr/share/ca-certificates/3413.pem (1338 bytes)
	I1101 17:00:25.238154   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /usr/share/ca-certificates/34132.pem (1708 bytes)
	I1101 17:00:25.255476   20960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 17:00:25.267914   20960 ssh_runner.go:195] Run: openssl version
	I1101 17:00:25.273733   20960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34132.pem && ln -fs /usr/share/ca-certificates/34132.pem /etc/ssl/certs/34132.pem"
	I1101 17:00:25.281611   20960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34132.pem
	I1101 17:00:25.285781   20960 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  1 22:49 /usr/share/ca-certificates/34132.pem
	I1101 17:00:25.285834   20960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34132.pem
	I1101 17:00:25.291247   20960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34132.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 17:00:25.298552   20960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 17:00:25.306278   20960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 17:00:25.310497   20960 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  1 22:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 17:00:25.310543   20960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 17:00:25.315692   20960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 17:00:25.322849   20960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3413.pem && ln -fs /usr/share/ca-certificates/3413.pem /etc/ssl/certs/3413.pem"
	I1101 17:00:25.330360   20960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3413.pem
	I1101 17:00:25.334190   20960 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  1 22:49 /usr/share/ca-certificates/3413.pem
	I1101 17:00:25.334257   20960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3413.pem
	I1101 17:00:25.339462   20960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3413.pem /etc/ssl/certs/51391683.0"
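
Each CA copied above is hashed with openssl x509 -hash and exposed in /etc/ssl/certs as <hash>.0 (51391683.0, b5213941.0 and 3ec20f2e.0 in this run), which is how OpenSSL's directory lookup locates it. A small sketch of that install step follows; it assumes openssl on PATH and write access to /etc/ssl/certs, and is illustrative rather than minikube's certs.go.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA links a certificate into /etc/ssl/certs under its OpenSSL subject hash.
    func installCA(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("installCA:", err)
        }
    }
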
	I1101 17:00:25.346804   20960 kubeadm.go:396] StartCluster: {Name:newest-cni-165923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-165923 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNo
deRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 17:00:25.346924   20960 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 17:00:25.370462   20960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 17:00:25.378432   20960 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1101 17:00:25.378446   20960 kubeadm.go:627] restartCluster start
	I1101 17:00:25.378502   20960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 17:00:25.385251   20960 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:25.385332   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:25.447732   20960 kubeconfig.go:135] verify returned: extract IP: "newest-cni-165923" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 17:00:25.447908   20960 kubeconfig.go:146] "newest-cni-165923" context is missing from /Users/jenkins/minikube-integration/15232-2108/kubeconfig - will repair!
	I1101 17:00:25.448257   20960 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/kubeconfig: {Name:mka869f80d5e962d9ffa24675c3f5e3e0593fcfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 17:00:25.449550   20960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 17:00:25.457571   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:25.457637   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:25.465940   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:25.667187   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:25.667365   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:25.678269   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:25.868056   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:25.868241   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:25.878717   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:26.067407   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:26.067559   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:26.078346   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:26.266103   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:26.266248   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:26.277213   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:26.467823   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:26.468101   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:26.478520   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:26.668066   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:26.668248   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:26.679669   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:26.868036   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:26.868215   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:26.879039   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:27.066098   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:27.066241   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:27.075351   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:27.266872   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:27.267011   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:27.277765   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:27.466967   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:27.467168   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:27.477750   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:27.668055   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:27.668276   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:27.678936   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:27.866427   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:27.866595   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:27.876854   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:28.067336   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:28.067476   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:28.078303   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:28.266193   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:28.266387   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:28.276883   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:28.467734   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:28.467866   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:28.478214   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:28.478223   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:28.478280   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:28.486116   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:28.486133   20960 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I1101 17:00:28.486140   20960 kubeadm.go:1114] stopping kube-system containers ...
	I1101 17:00:28.486229   20960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 17:00:28.510374   20960 docker.go:444] Stopping containers: [83d5a0fae290 c5474d9301c0 a88748f79477 9e3fbb234296 a13c76f0b959 b1c7f0a2d66c f82a1683cea1 af5efabcae2e 9fc33b9b9edc 9f4a0258f00c 26996445c6d0 68a15afdb4a0 9a6b5025122c 786dec75c2c4 a7e0034e24e2 c7c424e35d97 ce88b7d87dc1]
	I1101 17:00:28.510482   20960 ssh_runner.go:195] Run: docker stop 83d5a0fae290 c5474d9301c0 a88748f79477 9e3fbb234296 a13c76f0b959 b1c7f0a2d66c f82a1683cea1 af5efabcae2e 9fc33b9b9edc 9f4a0258f00c 26996445c6d0 68a15afdb4a0 9a6b5025122c 786dec75c2c4 a7e0034e24e2 c7c424e35d97 ce88b7d87dc1
	I1101 17:00:28.534513   20960 ssh_runner.go:195] Run: sudo systemctl stop kubelet
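
Before reconfiguring, the restart path lists every k8s_*_(kube-system)_ container, stops them all, and then stops the kubelet so nothing recreates the static pods while the config files are being rewritten. The sketch below covers the container-stopping half with the plain docker CLI; it is illustrative only, since the test issues these commands through ssh_runner inside the node container.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // stopKubeSystemContainers stops every container whose name matches the
    // kube-system pod pattern, as in the "Stopping containers" step above.
    func stopKubeSystemContainers() error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return nil
        }
        args := append([]string{"stop"}, ids...)
        return exec.Command("docker", args...).Run()
    }

    func main() {
        if err := stopKubeSystemContainers(); err != nil {
            fmt.Println("stop failed:", err)
        }
    }
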
	I1101 17:00:28.544548   20960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 17:00:28.552204   20960 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Nov  1 23:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov  1 23:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Nov  1 23:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Nov  1 23:59 /etc/kubernetes/scheduler.conf
	
	I1101 17:00:28.552278   20960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 17:00:28.560657   20960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 17:00:28.569191   20960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 17:00:28.577684   20960 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:28.577804   20960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 17:00:28.585701   20960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 17:00:28.595185   20960 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:28.595262   20960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 17:00:28.603363   20960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 17:00:28.611281   20960 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 17:00:28.611317   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 17:00:28.661432   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 17:00:29.606537   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 17:00:29.736012   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 17:00:29.791708   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 17:00:29.904878   20960 api_server.go:51] waiting for apiserver process to appear ...
	I1101 17:00:29.904965   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 17:00:30.418888   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 17:00:30.919012   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 17:00:30.933667   20960 api_server.go:71] duration metric: took 1.028799891s to wait for apiserver process to appear ...
	I1101 17:00:30.933684   20960 api_server.go:87] waiting for apiserver healthz status ...
	I1101 17:00:30.933697   20960 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54985/healthz ...
	I1101 17:00:30.935291   20960 api_server.go:268] stopped: https://127.0.0.1:54985/healthz: Get "https://127.0.0.1:54985/healthz": EOF
	I1101 17:00:31.437226   20960 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54985/healthz ...
	I1101 17:00:34.216539   20960 api_server.go:278] https://127.0.0.1:54985/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 17:00:34.216565   20960 api_server.go:102] status: https://127.0.0.1:54985/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 17:00:34.436926   20960 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54985/healthz ...
	I1101 17:00:34.444116   20960 api_server.go:278] https://127.0.0.1:54985/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 17:00:34.444137   20960 api_server.go:102] status: https://127.0.0.1:54985/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 17:00:34.935843   20960 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54985/healthz ...
	I1101 17:00:34.944155   20960 api_server.go:278] https://127.0.0.1:54985/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 17:00:34.944180   20960 api_server.go:102] status: https://127.0.0.1:54985/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 17:00:35.436644   20960 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54985/healthz ...
	I1101 17:00:35.443427   20960 api_server.go:278] https://127.0.0.1:54985/healthz returned 200:
	ok
	I1101 17:00:35.452114   20960 api_server.go:140] control plane version: v1.25.3
	I1101 17:00:35.452131   20960 api_server.go:130] duration metric: took 4.518484953s to wait for apiserver health ...
	I1101 17:00:35.452139   20960 cni.go:95] Creating CNI manager for ""
	I1101 17:00:35.452145   20960 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 17:00:35.452159   20960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 17:00:35.460397   20960 system_pods.go:59] 8 kube-system pods found
	I1101 17:00:35.460417   20960 system_pods.go:61] "coredns-565d847f94-xcxg8" [717409c1-c510-4c65-9a11-56dbb7b6f749] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 17:00:35.460423   20960 system_pods.go:61] "etcd-newest-cni-165923" [190497c4-bc15-495e-8ced-6dbeacfee88b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 17:00:35.460428   20960 system_pods.go:61] "kube-apiserver-newest-cni-165923" [0a46fb50-8536-4e16-8eb0-d176e15dd0f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 17:00:35.460433   20960 system_pods.go:61] "kube-controller-manager-newest-cni-165923" [cbe63ec4-23dd-48b0-85ca-640e4c8e39ca] Running
	I1101 17:00:35.460436   20960 system_pods.go:61] "kube-proxy-sc8lm" [97fb7e69-3c2b-44b7-bd18-97fd44f40b3d] Running
	I1101 17:00:35.460441   20960 system_pods.go:61] "kube-scheduler-newest-cni-165923" [ffecae28-01fb-4023-8426-f9d563720fe9] Running
	I1101 17:00:35.460447   20960 system_pods.go:61] "metrics-server-5c8fd5cf8-d8wg4" [113d513e-e7fd-414b-ab71-82518cb0ff93] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 17:00:35.460452   20960 system_pods.go:61] "storage-provisioner" [b3e0f7b1-84e4-4769-b4f7-f4d8e72f9f88] Running
	I1101 17:00:35.460457   20960 system_pods.go:74] duration metric: took 8.292718ms to wait for pod list to return data ...
	I1101 17:00:35.460463   20960 node_conditions.go:102] verifying NodePressure condition ...
	I1101 17:00:35.464649   20960 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I1101 17:00:35.464666   20960 node_conditions.go:123] node cpu capacity is 6
	I1101 17:00:35.464676   20960 node_conditions.go:105] duration metric: took 4.209939ms to run NodePressure ...
	I1101 17:00:35.464695   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 17:00:35.650520   20960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 17:00:35.659502   20960 ops.go:34] apiserver oom_adj: -16
	I1101 17:00:35.659522   20960 kubeadm.go:631] restartCluster took 10.281164717s
	I1101 17:00:35.659532   20960 kubeadm.go:398] StartCluster complete in 10.31283459s
	I1101 17:00:35.659546   20960 settings.go:142] acquiring lock: {Name:mkdb6df16d9cd02d82e4a95348c412b3d2076fed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 17:00:35.659657   20960 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 17:00:35.660264   20960 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/kubeconfig: {Name:mka869f80d5e962d9ffa24675c3f5e3e0593fcfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 17:00:35.663761   20960 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-165923" rescaled to 1
	I1101 17:00:35.663802   20960 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1101 17:00:35.663825   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 17:00:35.704559   20960 out.go:177] * Verifying Kubernetes components...
	I1101 17:00:35.663857   20960 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1101 17:00:35.664030   20960 config.go:180] Loaded profile config "newest-cni-165923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 17:00:35.778581   20960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 17:00:35.778670   20960 addons.go:65] Setting default-storageclass=true in profile "newest-cni-165923"
	I1101 17:00:35.778683   20960 addons.go:65] Setting dashboard=true in profile "newest-cni-165923"
	I1101 17:00:35.778672   20960 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-165923"
	I1101 17:00:35.778730   20960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-165923"
	I1101 17:00:35.778744   20960 addons.go:153] Setting addon dashboard=true in "newest-cni-165923"
	I1101 17:00:35.778747   20960 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-165923"
	W1101 17:00:35.778776   20960 addons.go:162] addon dashboard should already be in state true
	W1101 17:00:35.778780   20960 addons.go:162] addon storage-provisioner should already be in state true
	I1101 17:00:35.778735   20960 addons.go:65] Setting metrics-server=true in profile "newest-cni-165923"
	I1101 17:00:35.778869   20960 addons.go:153] Setting addon metrics-server=true in "newest-cni-165923"
	W1101 17:00:35.778881   20960 addons.go:162] addon metrics-server should already be in state true
	I1101 17:00:35.778944   20960 host.go:66] Checking if "newest-cni-165923" exists ...
	I1101 17:00:35.778946   20960 host.go:66] Checking if "newest-cni-165923" exists ...
	I1101 17:00:35.778954   20960 host.go:66] Checking if "newest-cni-165923" exists ...
	I1101 17:00:35.779437   20960 cli_runner.go:164] Run: docker container inspect newest-cni-165923 --format={{.State.Status}}
	I1101 17:00:35.780419   20960 cli_runner.go:164] Run: docker container inspect newest-cni-165923 --format={{.State.Status}}
	I1101 17:00:35.780427   20960 cli_runner.go:164] Run: docker container inspect newest-cni-165923 --format={{.State.Status}}
	I1101 17:00:35.780574   20960 cli_runner.go:164] Run: docker container inspect newest-cni-165923 --format={{.State.Status}}
	I1101 17:00:35.835864   20960 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1101 17:00:35.836415   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:35.897331   20960 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1101 17:00:35.903315   20960 addons.go:153] Setting addon default-storageclass=true in "newest-cni-165923"
	I1101 17:00:35.960344   20960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1101 17:00:35.960360   20960 addons.go:162] addon default-storageclass should already be in state true
	I1101 17:00:35.938336   20960 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 17:00:35.981263   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 17:00:35.917031   20960 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 17:00:35.981306   20960 host.go:66] Checking if "newest-cni-165923" exists ...
	I1101 17:00:35.981379   20960 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 17:00:35.981612   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:36.018382   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 17:00:35.982026   20960 cli_runner.go:164] Run: docker container inspect newest-cni-165923 --format={{.State.Status}}
	I1101 17:00:36.055354   20960 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I1101 17:00:36.018539   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:36.073951   20960 api_server.go:51] waiting for apiserver process to appear ...
	I1101 17:00:36.092243   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 17:00:36.092267   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 17:00:36.092284   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 17:00:36.092412   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:36.112433   20960 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 17:00:36.112457   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 17:00:36.112588   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:36.113627   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:36.121024   20960 api_server.go:71] duration metric: took 457.20422ms to wait for apiserver process to appear ...
	I1101 17:00:36.121052   20960 api_server.go:87] waiting for apiserver healthz status ...
	I1101 17:00:36.121083   20960 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54985/healthz ...
	I1101 17:00:36.131671   20960 api_server.go:278] https://127.0.0.1:54985/healthz returned 200:
	ok
	I1101 17:00:36.134012   20960 api_server.go:140] control plane version: v1.25.3
	I1101 17:00:36.134031   20960 api_server.go:130] duration metric: took 12.971024ms to wait for apiserver health ...
	I1101 17:00:36.134039   20960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 17:00:36.143504   20960 system_pods.go:59] 8 kube-system pods found
	I1101 17:00:36.143531   20960 system_pods.go:61] "coredns-565d847f94-xcxg8" [717409c1-c510-4c65-9a11-56dbb7b6f749] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 17:00:36.143542   20960 system_pods.go:61] "etcd-newest-cni-165923" [190497c4-bc15-495e-8ced-6dbeacfee88b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 17:00:36.143556   20960 system_pods.go:61] "kube-apiserver-newest-cni-165923" [0a46fb50-8536-4e16-8eb0-d176e15dd0f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 17:00:36.143561   20960 system_pods.go:61] "kube-controller-manager-newest-cni-165923" [cbe63ec4-23dd-48b0-85ca-640e4c8e39ca] Running
	I1101 17:00:36.143568   20960 system_pods.go:61] "kube-proxy-sc8lm" [97fb7e69-3c2b-44b7-bd18-97fd44f40b3d] Running
	I1101 17:00:36.143575   20960 system_pods.go:61] "kube-scheduler-newest-cni-165923" [ffecae28-01fb-4023-8426-f9d563720fe9] Running
	I1101 17:00:36.143585   20960 system_pods.go:61] "metrics-server-5c8fd5cf8-d8wg4" [113d513e-e7fd-414b-ab71-82518cb0ff93] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 17:00:36.143594   20960 system_pods.go:61] "storage-provisioner" [b3e0f7b1-84e4-4769-b4f7-f4d8e72f9f88] Running
	I1101 17:00:36.143601   20960 system_pods.go:74] duration metric: took 9.556398ms to wait for pod list to return data ...
	I1101 17:00:36.143610   20960 default_sa.go:34] waiting for default service account to be created ...
	I1101 17:00:36.147265   20960 default_sa.go:45] found service account: "default"
	I1101 17:00:36.147279   20960 default_sa.go:55] duration metric: took 3.663363ms for default service account to be created ...
	I1101 17:00:36.147289   20960 kubeadm.go:573] duration metric: took 483.476344ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1101 17:00:36.147307   20960 node_conditions.go:102] verifying NodePressure condition ...
	I1101 17:00:36.152883   20960 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I1101 17:00:36.152914   20960 node_conditions.go:123] node cpu capacity is 6
	I1101 17:00:36.152928   20960 node_conditions.go:105] duration metric: took 5.612155ms to run NodePressure ...
	I1101 17:00:36.152942   20960 start.go:217] waiting for startup goroutines ...
	I1101 17:00:36.187089   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:36.187103   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:36.197625   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:36.321114   20960 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 17:00:36.321127   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I1101 17:00:36.346397   20960 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 17:00:36.346410   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 17:00:36.408728   20960 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 17:00:36.408757   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 17:00:36.417682   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 17:00:36.417697   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 17:00:36.420321   20960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 17:00:36.420365   20960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 17:00:36.498979   20960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 17:00:36.508806   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 17:00:36.508823   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 17:00:36.534626   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 17:00:36.534647   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 17:00:36.631632   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 17:00:36.631644   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I1101 17:00:36.709532   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 17:00:36.709548   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 17:00:36.733743   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 17:00:36.733757   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 17:00:36.814588   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 17:00:36.814607   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 17:00:36.851362   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 17:00:36.851375   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 17:00:36.916016   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 17:00:36.916031   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 17:00:36.934442   20960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 17:00:37.730506   20960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.310173301s)
	I1101 17:00:37.730567   20960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.310201604s)
	I1101 17:00:37.755443   20960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.25644573s)
	I1101 17:00:37.755474   20960 addons.go:383] Verifying addon metrics-server=true in "newest-cni-165923"
	I1101 17:00:37.948561   20960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.014099369s)
	I1101 17:00:37.972765   20960 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1101 17:00:38.009562   20960 addons.go:414] enableAddons completed in 2.345717427s
	I1101 17:00:38.010024   20960 ssh_runner.go:195] Run: rm -f paused
	I1101 17:00:38.057089   20960 start.go:506] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I1101 17:00:38.080669   20960 out.go:177] * Done! kubectl is now configured to use "newest-cni-165923" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-11-01 23:43:42 UTC, end at Wed 2022-11-02 00:01:21 UTC. --
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: Stopping Docker Application Container Engine...
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[133]: time="2022-11-01T23:43:44.286667976Z" level=info msg="Processing signal 'terminated'"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[133]: time="2022-11-01T23:43:44.287593863Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[133]: time="2022-11-01T23:43:44.288112899Z" level=info msg="Daemon shutdown complete"
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: docker.service: Succeeded.
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: Stopped Docker Application Container Engine.
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: Starting Docker Application Container Engine...
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.329553637Z" level=info msg="Starting up"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.331286692Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.331318371Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.331337221Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.331345474Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.332301530Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.332334687Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.332347311Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.332353687Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.336339693Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.342180285Z" level=info msg="Loading containers: start."
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.419278363Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.449439673Z" level=info msg="Loading containers: done."
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.456899068Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.456954410Z" level=info msg="Daemon has completed initialization"
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: Started Docker Application Container Engine.
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.478922547Z" level=info msg="API listen on [::]:2376"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.484610172Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-11-02T00:01:23Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  00:01:24 up  1:30,  0 users,  load average: 1.18, 0.90, 0.93
	Linux old-k8s-version-163757 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-11-01 23:43:42 UTC, end at Wed 2022-11-02 00:01:24 UTC. --
	Nov 02 00:01:22 old-k8s-version-163757 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 02 00:01:23 old-k8s-version-163757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 927.
	Nov 02 00:01:23 old-k8s-version-163757 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 02 00:01:23 old-k8s-version-163757 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 02 00:01:23 old-k8s-version-163757 kubelet[24397]: I1102 00:01:23.323819   24397 server.go:410] Version: v1.16.0
	Nov 02 00:01:23 old-k8s-version-163757 kubelet[24397]: I1102 00:01:23.324248   24397 plugins.go:100] No cloud provider specified.
	Nov 02 00:01:23 old-k8s-version-163757 kubelet[24397]: I1102 00:01:23.324290   24397 server.go:773] Client rotation is on, will bootstrap in background
	Nov 02 00:01:23 old-k8s-version-163757 kubelet[24397]: I1102 00:01:23.326807   24397 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 02 00:01:23 old-k8s-version-163757 kubelet[24397]: W1102 00:01:23.327774   24397 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 02 00:01:23 old-k8s-version-163757 kubelet[24397]: W1102 00:01:23.327841   24397 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 02 00:01:23 old-k8s-version-163757 kubelet[24397]: F1102 00:01:23.327866   24397 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 02 00:01:23 old-k8s-version-163757 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 02 00:01:23 old-k8s-version-163757 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 02 00:01:23 old-k8s-version-163757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Nov 02 00:01:23 old-k8s-version-163757 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 02 00:01:23 old-k8s-version-163757 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 02 00:01:24 old-k8s-version-163757 kubelet[24418]: I1102 00:01:24.066643   24418 server.go:410] Version: v1.16.0
	Nov 02 00:01:24 old-k8s-version-163757 kubelet[24418]: I1102 00:01:24.066983   24418 plugins.go:100] No cloud provider specified.
	Nov 02 00:01:24 old-k8s-version-163757 kubelet[24418]: I1102 00:01:24.067033   24418 server.go:773] Client rotation is on, will bootstrap in background
	Nov 02 00:01:24 old-k8s-version-163757 kubelet[24418]: I1102 00:01:24.068793   24418 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 02 00:01:24 old-k8s-version-163757 kubelet[24418]: W1102 00:01:24.069690   24418 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 02 00:01:24 old-k8s-version-163757 kubelet[24418]: W1102 00:01:24.069782   24418 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 02 00:01:24 old-k8s-version-163757 kubelet[24418]: F1102 00:01:24.069852   24418 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 02 00:01:24 old-k8s-version-163757 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 02 00:01:24 old-k8s-version-163757 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 17:01:24.154038   21251 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-163757 -n old-k8s-version-163757
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-163757 -n old-k8s-version-163757: exit status 2 (396.412586ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-163757" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.82s)
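
Note: the kubelet log above shows the node crash-looping on "failed to run Kubelet: mountpoint for cpu not found" (restart counter at 927/928), which is why the apiserver never comes back and "describe nodes" is refused. A minimal diagnostic sketch for a manual follow-up, assuming the docker driver names the node container after the profile (as the docker container inspect calls earlier in this log do) and that the container is still running:

    # list the cgroup mounts the kubelet would see inside the node container
    docker exec old-k8s-version-163757 sh -c 'mount | grep cgroup'
    # show which cgroup hierarchy the container itself is placed in
    docker exec old-k8s-version-163757 cat /proc/self/cgroup
    # confirm the kubelet unit state and restart count
    docker exec old-k8s-version-163757 systemctl status kubelet --no-pager

If no per-controller cpu mount shows up, that would be consistent with the v1.16.0 kubelet (which predates cgroup v2 support) running against a unified-hierarchy host such as the 5.15 linuxkit kernel reported in the kernel section above. These commands are illustrative only and are not part of the test harness.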

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:01:48.367422    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 17:01:49.762540    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:02:19.531618    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:02:35.734878    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:02:44.362124    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 17:02:49.855090    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:02:54.408983    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:03:32.068109    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 17:03:35.423571    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
E1101 17:03:35.428769    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
E1101 17:03:35.440557    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
E1101 17:03:35.462765    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
E1101 17:03:35.504931    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
E1101 17:03:35.587104    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
E1101 17:03:35.749173    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:03:36.071265    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
E1101 17:03:36.712372    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
E1101 17:03:37.993983    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
E1101 17:03:40.556256    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
E1101 17:03:45.676599    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:03:55.918780    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:04:07.417334    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:04:16.399254    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:04:41.644065    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:04:57.359527    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
E1101 17:05:02.328905    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:05:48.892957    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:06:18.527833    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 17:06:19.280974    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:06:25.401221    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:06:48.364373    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 17:06:49.758674    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:07:19.527464    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:07:35.730218    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:07:44.357481    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53981/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1101 17:07:49.853849    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 17:07:54.407130    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1101 17:08:32.065562    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1101 17:08:35.420617    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1101 17:09:03.119568    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/default-k8s-diff-port-165249/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1101 17:09:21.570117    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1101 17:09:41.641177    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1101 17:09:51.408376    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1101 17:10:02.328095    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-163757 -n old-k8s-version-163757
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-163757 -n old-k8s-version-163757: exit status 2 (394.00624ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-163757" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-163757 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-163757 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.11µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-163757 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-163757
helpers_test.go:235: (dbg) docker inspect old-k8s-version-163757:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e",
	        "Created": "2022-11-01T23:38:04.256272958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274043,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T23:43:41.854152852Z",
	            "FinishedAt": "2022-11-01T23:43:38.949849093Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/hostname",
	        "HostsPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/hosts",
	        "LogPath": "/var/lib/docker/containers/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e/68479d844c03b1c96d1bf8e75ccb64eb9421923cb4772b8e7887ab9b5e6a873e-json.log",
	        "Name": "/old-k8s-version-163757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-163757:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-163757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a-init/diff:/var/lib/docker/overlay2/397c781354d1ae8b5c71df69b26a9a2493cf01723d23317a9b36f56b62ab53f3/diff:/var/lib/docker/overlay2/fe3fd9f7a011255c997093c6f7e1cb70c20cab26db5f52ff8b83c33d58519532/diff:/var/lib/docker/overlay2/f7328bad1e482720081fe1f9d1ab2ee05c71a9060abf63daf63a25e84818f237/diff:/var/lib/docker/overlay2/ca039979ed22affed678394443deee5ed35f2eb49243537b4205433189b87b2c/diff:/var/lib/docker/overlay2/a2ee3e754036b8777f801c847988e78d9b0ef881e82ea7467cef35a1261b9e20/diff:/var/lib/docker/overlay2/3de609efaeca546b0261017a1b19a9fa9ff6c9272609346b897e8075687c3698/diff:/var/lib/docker/overlay2/9101d388c406c87b2d10dc219dc3225ea59bfbedfc167adbfdf7578ed74a528b/diff:/var/lib/docker/overlay2/ba2db849d29a96ccb7729ee8861cfb647a06ba046b1016e99e3c2ef9e7b92675/diff:/var/lib/docker/overlay2/bb7315b5e1884c47eaad6eddfa4e422b1b240ff1d1112deab5ff41e40a12970d/diff:/var/lib/docker/overlay2/25fd1b
7d003c93a7ef576bb052318e940d8e1c8a40db37179b03563a8a099490/diff:/var/lib/docker/overlay2/f22743b1afcc328f7d2c4740efeb1401d6c011f499d200dc16b11a352dfc07f7/diff:/var/lib/docker/overlay2/59ca3268b7b3862516f40c07f313c5cdbe659f949ce4bd6e4eedcfcdd80409b0/diff:/var/lib/docker/overlay2/ce66536b9c7b7d4d38eeb3b0f5842c927c181c4584e60fa25989b9de30ec5856/diff:/var/lib/docker/overlay2/f0bdec7810d2b53f48492f34d7889fdb7c86d692422978de474816cf3bf8e923/diff:/var/lib/docker/overlay2/b0f0a882b23b6635539c83a8a2837c52090aa306e12f64ed83edcd03596f0cde/diff:/var/lib/docker/overlay2/60180139b1a11a94ee6174e6512bad4a5e162470c686d6cc7c91d7c9fb1907a2/diff:/var/lib/docker/overlay2/f1a7c8c448077705a2b48dfccf2f6e599a8ef782efd7d171b349ad43a0cddcae/diff:/var/lib/docker/overlay2/d64e00c1407419f2261e34d0974453ad696f514f79d8ecdac1b8c3a2a117349c/diff:/var/lib/docker/overlay2/7af90e8306e3b3e8ed7d2d67099da7a7cbe0ed97a5b983c84548135857efc4d0/diff:/var/lib/docker/overlay2/85101cd67d726a8a42d8951a230b3acd76d4a62615c6ffe4aac1ebef17ab422d/diff:/var/lib/d
ocker/overlay2/09a5d9c2f9897ae114e76d4aed5af38d250d044b1d274f8dafa0cfd17789ea54/diff:/var/lib/docker/overlay2/a6b97f972b460567b473da6022dd8658db13cb06830fcb676e8c1ebc927e1d44/diff:/var/lib/docker/overlay2/b569cecedfd9b79ea9a49645099405472d529e224ffe4abed0921d9fbec171a7/diff:/var/lib/docker/overlay2/278ceb611708e5dc8e810eaeb6b08b283d298009965d14772f2b61f95355477a/diff:/var/lib/docker/overlay2/c6693259dde0f3190d9019d8aca0c27c980d5c31a40fff8274d2a57d8ef19f41/diff:/var/lib/docker/overlay2/4db1d3b0ba37b1bfa0f486b9c1b327686a1069e2e6cbfc2e279c1f597f7cd346/diff:/var/lib/docker/overlay2/50e4b8ce3599837ac51b108fd983aa9b876f47f3e7253cd0976be8df23c73a33/diff:/var/lib/docker/overlay2/ad2b5d101e83bca01ddb2257701208ceb46b4668f6d14e84ee171975bb6175db/diff:/var/lib/docker/overlay2/746a904e8c69bb992522394e576896d4e35d056023809a58fbac92d497d2968a/diff:/var/lib/docker/overlay2/03794e35d9fe845753f9bcb5648e7a7c1fcf7db9bcd82c7c3824c2142cb8a2b6/diff:/var/lib/docker/overlay2/75caadeb2dfb8cc524a4e0f9d7862ccf017f755a24e00453f5a85eb29a5
837de/diff:/var/lib/docker/overlay2/1a5ce4ae9316bb13d1739267bf6b30a17188ca9ac127663735bfac3d15e50abe/diff:/var/lib/docker/overlay2/fa61eaf7b77e6fa75456860b8b75e4779478979f9b4ad94cd62eadd22743421e/diff:/var/lib/docker/overlay2/9c1cd4fe6bd059e33f020198f5ff305dab3f4b102b14b5894c76cae7dc769b92/diff:/var/lib/docker/overlay2/46cf92e0e9cc79002bfb0f5c2e0ab28c771f260b3fea2cb434cd84d3a1ea7659/diff:/var/lib/docker/overlay2/b47be14a30a9c0339a3a49b552cad979169d6c9a909e7837759a155b4c74d128/diff:/var/lib/docker/overlay2/598716c3d9ddb5de953d6a462fc1af49f742bbe02fd1c01f7d548a9f93d3913d/diff:/var/lib/docker/overlay2/cd665df1518202898f79e694456b55b64d6095a28556be2dc545241df7633be7/diff:/var/lib/docker/overlay2/909b0f879f4ce91be83bada76dad0599c2839fa8a6534f976ee095ad44dce7c6/diff:/var/lib/docker/overlay2/fd78ebbf3c4baf9a9f0036cb0ed9a8908a05f2e78572d88fcb3f026cb000710b/diff:/var/lib/docker/overlay2/8a030c72fc8571d3240e0ab2d2aea23b84385f28f3ef2dd82b5be5b925dbca5b/diff:/var/lib/docker/overlay2/d87a4221a646268a958798509b8c3cb343463c
c8427ae96a424f653a0a4508c7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f6a16dc08ba19e57a68847f0d61ac0a698154f4025d83160d63955526b87a4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-163757",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-163757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-163757",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-163757",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-163757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a78b66e11436bdcef5ae4e878d76bd762a44be207b062530209a62e8ac180eb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53982"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53983"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53984"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53981"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6a78b66e1143",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-163757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "68479d844c03",
	                        "old-k8s-version-163757"
	                    ],
	                    "NetworkID": "de11f6b0d4a3e9909764ae953f0f910d0d29438f96300416f12a7f896caa0f32",
	                    "EndpointID": "d35627681b46bada56d185972cf0b735b505b074234eaf79ad5bd6396bcc6bec",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757: exit status 2 (389.932227ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-163757 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-163757 logs -n 25: (3.41542392s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-164600                                      | embed-certs-164600           | jenkins | v1.27.1 | 01 Nov 22 16:52 PDT | 01 Nov 22 16:52 PDT |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p embed-certs-164600                                      | embed-certs-164600           | jenkins | v1.27.1 | 01 Nov 22 16:52 PDT | 01 Nov 22 16:52 PDT |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-164600                                      | embed-certs-164600           | jenkins | v1.27.1 | 01 Nov 22 16:52 PDT | 01 Nov 22 16:52 PDT |
	| delete  | -p embed-certs-164600                                      | embed-certs-164600           | jenkins | v1.27.1 | 01 Nov 22 16:52 PDT | 01 Nov 22 16:52 PDT |
	| delete  | -p                                                         | disable-driver-mounts-165249 | jenkins | v1.27.1 | 01 Nov 22 16:52 PDT | 01 Nov 22 16:52 PDT |
	|         | disable-driver-mounts-165249                               |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:52 PDT | 01 Nov 22 16:53 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:53 PDT | 01 Nov 22 16:53 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:53 PDT | 01 Nov 22 16:53 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-165249           | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:53 PDT | 01 Nov 22 16:53 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:53 PDT | 01 Nov 22 16:58 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:59 PDT | 01 Nov 22 16:59 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                              |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:59 PDT | 01 Nov 22 16:59 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:59 PDT | 01 Nov 22 16:59 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:59 PDT | 01 Nov 22 16:59 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-165249 | jenkins | v1.27.1 | 01 Nov 22 16:59 PDT | 01 Nov 22 16:59 PDT |
	|         | default-k8s-diff-port-165249                               |                              |         |         |                     |                     |
	| start   | -p newest-cni-165923 --memory=2200 --alsologtostderr       | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 16:59 PDT | 01 Nov 22 17:00 PDT |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-165923                 | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-165923                                       | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-165923                      | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-165923 --memory=2200 --alsologtostderr       | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-165923 sudo                                  | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-165923                                       | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-165923                                       | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-165923                                       | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	| delete  | -p newest-cni-165923                                       | newest-cni-165923            | jenkins | v1.27.1 | 01 Nov 22 17:00 PDT | 01 Nov 22 17:00 PDT |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
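
A note for readers: the last "start" entry in the table above can be reproduced outside the test harness by invoking the minikube binary (MINIKUBE_BIN=out/minikube-darwin-amd64, as listed in the log below) with the same flags. The Go sketch below is illustrative only and is not the harness code itself; the binary path and every flag are copied from the table.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Flags taken verbatim from the newest-cni-165923 start entry above.
		args := []string{
			"start", "-p", "newest-cni-165923",
			"--memory=2200", "--alsologtostderr",
			"--wait=apiserver,system_pods,default_sa",
			"--feature-gates", "ServerSideApply=true",
			"--network-plugin=cni",
			"--extra-config=kubeadm.pod-network-cidr=192.168.111.111/16",
			"--driver=docker",
			"--kubernetes-version=v1.25.3",
		}
		cmd := exec.Command("out/minikube-darwin-amd64", args...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\n", out)
		if err != nil {
			fmt.Println("start failed:", err)
		}
	}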
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/01 17:00:20
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 17:00:20.388122   20960 out.go:296] Setting OutFile to fd 1 ...
	I1101 17:00:20.388311   20960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 17:00:20.388317   20960 out.go:309] Setting ErrFile to fd 2...
	I1101 17:00:20.388322   20960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 17:00:20.388444   20960 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	I1101 17:00:20.389013   20960 out.go:303] Setting JSON to false
	I1101 17:00:20.407630   20960 start.go:116] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5395,"bootTime":1667341825,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1101 17:00:20.407763   20960 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1101 17:00:20.429679   20960 out.go:177] * [newest-cni-165923] minikube v1.27.1 on Darwin 13.0
	I1101 17:00:20.451290   20960 notify.go:220] Checking for updates...
	I1101 17:00:20.473429   20960 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 17:00:20.495391   20960 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 17:00:20.517201   20960 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1101 17:00:20.538655   20960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 17:00:20.581201   20960 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	I1101 17:00:20.604975   20960 config.go:180] Loaded profile config "newest-cni-165923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 17:00:20.605638   20960 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 17:00:20.668062   20960 docker.go:137] docker version: linux-20.10.20
	I1101 17:00:20.668217   20960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 17:00:20.809840   20960 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-02 00:00:20.740340094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
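
The line above is the raw dump of `docker system info --format "{{json .}}"`. Below is a minimal Go sketch of decoding a handful of those fields; the struct covers only NCPU, MemTotal, ServerVersion and OperatingSystem (the full payload carries many more keys), and it is not minikube's own info parser.

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// A few of the fields visible in the dump above.
	type dockerInfo struct {
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
		ServerVersion   string `json:"ServerVersion"`
		OperatingSystem string `json:"OperatingSystem"`
	}
	
	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("%d CPUs, %d bytes RAM, docker %s on %s\n",
			info.NCPU, info.MemTotal, info.ServerVersion, info.OperatingSystem)
	}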
	I1101 17:00:20.853179   20960 out.go:177] * Using the docker driver based on existing profile
	I1101 17:00:20.874668   20960 start.go:282] selected driver: docker
	I1101 17:00:20.874698   20960 start.go:808] validating driver "docker" against &{Name:newest-cni-165923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-165923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 17:00:20.874861   20960 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 17:00:20.878753   20960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 17:00:21.019419   20960 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-02 00:00:20.95094078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 17:00:21.019596   20960 start_flags.go:907] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 17:00:21.019617   20960 cni.go:95] Creating CNI manager for ""
	I1101 17:00:21.019626   20960 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 17:00:21.019640   20960 start_flags.go:317] config:
	{Name:newest-cni-165923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-165923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 17:00:21.061974   20960 out.go:177] * Starting control plane node newest-cni-165923 in cluster newest-cni-165923
	I1101 17:00:21.083144   20960 cache.go:120] Beginning downloading kic base image for docker with docker
	I1101 17:00:21.104232   20960 out.go:177] * Pulling base image ...
	I1101 17:00:21.148245   20960 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1101 17:00:21.148276   20960 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 17:00:21.148347   20960 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1101 17:00:21.148370   20960 cache.go:57] Caching tarball of preloaded images
	I1101 17:00:21.149303   20960 preload.go:174] Found /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 17:00:21.149422   20960 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1101 17:00:21.149865   20960 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/config.json ...
	I1101 17:00:21.205136   20960 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1101 17:00:21.205153   20960 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1101 17:00:21.205162   20960 cache.go:208] Successfully downloaded all kic artifacts
	I1101 17:00:21.205215   20960 start.go:364] acquiring machines lock for newest-cni-165923: {Name:mkc0aae0e96bf69787cefd62e998860d82986621 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 17:00:21.205301   20960 start.go:368] acquired machines lock for "newest-cni-165923" in 65.397µs
	I1101 17:00:21.205335   20960 start.go:96] Skipping create...Using existing machine configuration
	I1101 17:00:21.205346   20960 fix.go:55] fixHost starting: 
	I1101 17:00:21.205631   20960 cli_runner.go:164] Run: docker container inspect newest-cni-165923 --format={{.State.Status}}
	I1101 17:00:21.262841   20960 fix.go:103] recreateIfNeeded on newest-cni-165923: state=Stopped err=<nil>
	W1101 17:00:21.262877   20960 fix.go:129] unexpected machine state, will restart: <nil>
	I1101 17:00:21.306194   20960 out.go:177] * Restarting existing docker container for "newest-cni-165923" ...
	I1101 17:00:21.327814   20960 cli_runner.go:164] Run: docker start newest-cni-165923
	I1101 17:00:21.658055   20960 cli_runner.go:164] Run: docker container inspect newest-cni-165923 --format={{.State.Status}}
	I1101 17:00:21.719727   20960 kic.go:415] container "newest-cni-165923" state is running.
	I1101 17:00:21.720399   20960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-165923
	I1101 17:00:21.785189   20960 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/config.json ...
	I1101 17:00:21.785878   20960 machine.go:88] provisioning docker machine ...
	I1101 17:00:21.785909   20960 ubuntu.go:169] provisioning hostname "newest-cni-165923"
	I1101 17:00:21.786006   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:21.862346   20960 main.go:134] libmachine: Using SSH client type: native
	I1101 17:00:21.862650   20960 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54981 <nil> <nil>}
	I1101 17:00:21.862700   20960 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-165923 && echo "newest-cni-165923" | sudo tee /etc/hostname
	I1101 17:00:21.998520   20960 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-165923
	
	I1101 17:00:21.998620   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:22.059881   20960 main.go:134] libmachine: Using SSH client type: native
	I1101 17:00:22.060049   20960 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54981 <nil> <nil>}
	I1101 17:00:22.060062   20960 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-165923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-165923/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-165923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 17:00:22.177190   20960 main.go:134] libmachine: SSH cmd err, output: <nil>: 
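
The SSH command above is an idempotent guard: it rewrites an existing 127.0.1.1 entry when one is present, and appends one only when the new hostname is missing from /etc/hosts. A sketch of how such a snippet can be assembled for an arbitrary hostname follows; hostsFixup is a hypothetical helper, not minikube's actual provisioner code.

	package main
	
	import "fmt"
	
	// hostsFixup returns the idempotent shell snippet shown above for a given
	// hostname: replace an existing 127.0.1.1 line if present, append otherwise.
	func hostsFixup(hostname string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
				else
					echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
				fi
			fi`, hostname, hostname, hostname)
	}
	
	func main() {
		fmt.Println(hostsFixup("newest-cni-165923"))
	}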
	I1101 17:00:22.177208   20960 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15232-2108/.minikube CaCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15232-2108/.minikube}
	I1101 17:00:22.177238   20960 ubuntu.go:177] setting up certificates
	I1101 17:00:22.177247   20960 provision.go:83] configureAuth start
	I1101 17:00:22.177342   20960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-165923
	I1101 17:00:22.240837   20960 provision.go:138] copyHostCerts
	I1101 17:00:22.240953   20960 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem, removing ...
	I1101 17:00:22.240963   20960 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem
	I1101 17:00:22.241062   20960 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem (1082 bytes)
	I1101 17:00:22.241283   20960 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem, removing ...
	I1101 17:00:22.241292   20960 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem
	I1101 17:00:22.241356   20960 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem (1123 bytes)
	I1101 17:00:22.241519   20960 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem, removing ...
	I1101 17:00:22.241525   20960 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem
	I1101 17:00:22.241585   20960 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem (1675 bytes)
	I1101 17:00:22.241731   20960 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem org=jenkins.newest-cni-165923 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-165923]
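
The provision step above generates a server certificate whose SANs cover the container IP, loopback, and the machine names. The sketch below builds a certificate with the same SAN list using crypto/x509; it self-signs for brevity, whereas the real flow signs with the profile's ca.pem/ca-key.pem, so treat it as an illustration of the SAN handling only.

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// SANs and org taken from the provision.go line above; a real run signs
		// with the existing CA instead of self-signing.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-165923"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-165923"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}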
	I1101 17:00:22.355775   20960 provision.go:172] copyRemoteCerts
	I1101 17:00:22.355853   20960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 17:00:22.355928   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:22.418848   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:22.505987   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 17:00:22.524841   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 17:00:22.544114   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 17:00:22.565533   20960 provision.go:86] duration metric: configureAuth took 388.27009ms
	I1101 17:00:22.565549   20960 ubuntu.go:193] setting minikube options for container-runtime
	I1101 17:00:22.565746   20960 config.go:180] Loaded profile config "newest-cni-165923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 17:00:22.565833   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:22.632727   20960 main.go:134] libmachine: Using SSH client type: native
	I1101 17:00:22.632897   20960 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54981 <nil> <nil>}
	I1101 17:00:22.632906   20960 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1101 17:00:22.751500   20960 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1101 17:00:22.751512   20960 ubuntu.go:71] root file system type: overlay
	I1101 17:00:22.751640   20960 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1101 17:00:22.751748   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:22.809637   20960 main.go:134] libmachine: Using SSH client type: native
	I1101 17:00:22.809791   20960 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54981 <nil> <nil>}
	I1101 17:00:22.809844   20960 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1101 17:00:22.937679   20960 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1101 17:00:22.937809   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:23.000892   20960 main.go:134] libmachine: Using SSH client type: native
	I1101 17:00:23.001049   20960 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54981 <nil> <nil>}
	I1101 17:00:23.001062   20960 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1101 17:00:23.122607   20960 main.go:134] libmachine: SSH cmd err, output: <nil>: 
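
The command above only swaps in /lib/systemd/system/docker.service.new and restarts the daemon when the freshly rendered unit actually differs from the installed one (if diff succeeds there is nothing to do). Below is a local, simplified sketch of that compare-then-swap idiom; replaceIfChanged is a hypothetical helper and the real step runs over SSH inside the node container.

	package main
	
	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)
	
	// replaceIfChanged mimics the "diff || { mv; daemon-reload; restart; }" idiom
	// above: only swap the unit file and bounce the service when content differs.
	func replaceIfChanged(current, candidate, service string) error {
		oldData, _ := os.ReadFile(current)
		newData, err := os.ReadFile(candidate)
		if err != nil {
			return err
		}
		if bytes.Equal(oldData, newData) {
			return os.Remove(candidate) // nothing to do
		}
		if err := os.Rename(candidate, current); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "restart", service},
		} {
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %s", err, out)
			}
		}
		return nil
	}
	
	func main() {
		if err := replaceIfChanged("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new", "docker"); err != nil {
			fmt.Println(err)
		}
	}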
	I1101 17:00:23.122625   20960 machine.go:91] provisioned docker machine in 1.336750573s
	I1101 17:00:23.122636   20960 start.go:300] post-start starting for "newest-cni-165923" (driver="docker")
	I1101 17:00:23.122641   20960 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 17:00:23.122732   20960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 17:00:23.122797   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:23.181387   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:23.268392   20960 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 17:00:23.271666   20960 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 17:00:23.271682   20960 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 17:00:23.271688   20960 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 17:00:23.271693   20960 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1101 17:00:23.271701   20960 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/addons for local assets ...
	I1101 17:00:23.271791   20960 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/files for local assets ...
	I1101 17:00:23.271958   20960 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem -> 34132.pem in /etc/ssl/certs
	I1101 17:00:23.272135   20960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 17:00:23.279120   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /etc/ssl/certs/34132.pem (1708 bytes)
	I1101 17:00:23.296995   20960 start.go:303] post-start completed in 174.35189ms
	I1101 17:00:23.297082   20960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 17:00:23.297150   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:23.359937   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:23.446667   20960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 17:00:23.451319   20960 fix.go:57] fixHost completed within 2.24599353s
	I1101 17:00:23.451330   20960 start.go:83] releasing machines lock for "newest-cni-165923", held for 2.246044356s
	I1101 17:00:23.451419   20960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-165923
	I1101 17:00:23.511174   20960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 17:00:23.511209   20960 ssh_runner.go:195] Run: systemctl --version
	I1101 17:00:23.511268   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:23.511279   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:23.575633   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:23.576195   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:23.661055   20960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1101 17:00:23.722646   20960 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1101 17:00:23.735601   20960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 17:00:23.805172   20960 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1101 17:00:23.881625   20960 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1101 17:00:23.892299   20960 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1101 17:00:23.892375   20960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 17:00:23.902503   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 17:00:23.916375   20960 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1101 17:00:23.984133   20960 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1101 17:00:24.042903   20960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 17:00:24.118282   20960 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1101 17:00:24.345870   20960 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1101 17:00:24.420017   20960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 17:00:24.491696   20960 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1101 17:00:24.502155   20960 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1101 17:00:24.502239   20960 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1101 17:00:24.506138   20960 start.go:472] Will wait 60s for crictl version
	I1101 17:00:24.506182   20960 ssh_runner.go:195] Run: sudo crictl version
	I1101 17:00:24.535836   20960 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
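
Both 60s waits above amount to polling until a path such as /var/run/cri-dockerd.sock exists (or a command starts succeeding). A minimal sketch of that poll-with-deadline pattern; waitForSocket is a hypothetical helper and the 500ms interval is an assumption, not the value minikube uses.

	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls for a path until it exists or the timeout expires,
	// mirroring the 60s waits logged above for /var/run/cri-dockerd.sock.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}
	
	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}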
	I1101 17:00:24.535928   20960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 17:00:24.563544   20960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1101 17:00:24.616098   20960 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1101 17:00:24.616243   20960 cli_runner.go:164] Run: docker exec -t newest-cni-165923 dig +short host.docker.internal
	I1101 17:00:24.730007   20960 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1101 17:00:24.730128   20960 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1101 17:00:24.734602   20960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 17:00:24.744543   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:24.825157   20960 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I1101 17:00:24.846808   20960 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1101 17:00:24.846917   20960 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 17:00:24.871420   20960 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1101 17:00:24.871439   20960 docker.go:543] Images already preloaded, skipping extraction
	I1101 17:00:24.871535   20960 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1101 17:00:24.897183   20960 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1101 17:00:24.897203   20960 cache_images.go:84] Images are preloaded, skipping loading
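
Loading is skipped above because every image needed for v1.25.3 already appears in the `docker images --format {{.Repository}}:{{.Tag}}` output. A sketch of that containment check, using a few of the tags listed above; imagesPreloaded is a hypothetical helper, not minikube's cache_images code.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// imagesPreloaded reports whether every expected image tag already shows up
	// in `docker images`, the check that lets a run skip extraction/loading.
	func imagesPreloaded(expected []string) (bool, error) {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			have[line] = true
		}
		for _, img := range expected {
			if !have[img] {
				return false, nil
			}
		}
		return true, nil
	}
	
	func main() {
		ok, err := imagesPreloaded([]string{
			"registry.k8s.io/kube-apiserver:v1.25.3",
			"registry.k8s.io/etcd:3.5.4-0",
			"registry.k8s.io/coredns/coredns:v1.9.3",
		})
		fmt.Println(ok, err)
	}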
	I1101 17:00:24.897335   20960 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1101 17:00:24.968938   20960 cni.go:95] Creating CNI manager for ""
	I1101 17:00:24.968954   20960 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 17:00:24.968969   20960 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I1101 17:00:24.968980   20960 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-165923 NodeName:newest-cni-165923 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1101 17:00:24.969095   20960 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-165923"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
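
The kubeadm/kubelet/kube-proxy documents above are rendered from the options struct logged at kubeadm.go:156. A toy sketch of rendering a small subset of those values through text/template follows; the struct fields and template text here are illustrative only and are not minikube's actual bootstrapper template.

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// A toy subset of the options logged above; the real template covers the
	// full InitConfiguration/ClusterConfiguration/KubeletConfiguration documents.
	type kubeadmOpts struct {
		AdvertiseAddress  string
		APIServerPort     int
		KubernetesVersion string
		PodSubnet         string
		ServiceCIDR       string
		ClusterName       string
	}
	
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`
	
	func main() {
		opts := kubeadmOpts{
			AdvertiseAddress:  "192.168.67.2",
			APIServerPort:     8443,
			KubernetesVersion: "v1.25.3",
			PodSubnet:         "192.168.111.111/16",
			ServiceCIDR:       "10.96.0.0/12",
			ClusterName:       "newest-cni-165923",
		}
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}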
	
	I1101 17:00:24.969175   20960 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-165923 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:newest-cni-165923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 17:00:24.969267   20960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1101 17:00:24.977648   20960 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 17:00:24.977710   20960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 17:00:24.985705   20960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (516 bytes)
	I1101 17:00:24.999502   20960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 17:00:25.013824   20960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1101 17:00:25.028100   20960 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1101 17:00:25.032024   20960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 17:00:25.041948   20960 certs.go:54] Setting up /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923 for IP: 192.168.67.2
	I1101 17:00:25.042078   20960 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key
	I1101 17:00:25.042204   20960 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key
	I1101 17:00:25.042373   20960 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/client.key
	I1101 17:00:25.042483   20960 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/apiserver.key.c7fa3a9e
	I1101 17:00:25.042548   20960 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/proxy-client.key
	I1101 17:00:25.042876   20960 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem (1338 bytes)
	W1101 17:00:25.042921   20960 certs.go:384] ignoring /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413_empty.pem, impossibly tiny 0 bytes
	I1101 17:00:25.042933   20960 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 17:00:25.042974   20960 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem (1082 bytes)
	I1101 17:00:25.043015   20960 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem (1123 bytes)
	I1101 17:00:25.043058   20960 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem (1675 bytes)
	I1101 17:00:25.043145   20960 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem (1708 bytes)
	I1101 17:00:25.043775   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 17:00:25.063851   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 17:00:25.083046   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 17:00:25.102623   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/newest-cni-165923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 17:00:25.121193   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 17:00:25.140789   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 17:00:25.161072   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 17:00:25.180224   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 17:00:25.199382   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 17:00:25.219039   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem --> /usr/share/ca-certificates/3413.pem (1338 bytes)
	I1101 17:00:25.238154   20960 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /usr/share/ca-certificates/34132.pem (1708 bytes)
	I1101 17:00:25.255476   20960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 17:00:25.267914   20960 ssh_runner.go:195] Run: openssl version
	I1101 17:00:25.273733   20960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34132.pem && ln -fs /usr/share/ca-certificates/34132.pem /etc/ssl/certs/34132.pem"
	I1101 17:00:25.281611   20960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34132.pem
	I1101 17:00:25.285781   20960 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  1 22:49 /usr/share/ca-certificates/34132.pem
	I1101 17:00:25.285834   20960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34132.pem
	I1101 17:00:25.291247   20960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34132.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 17:00:25.298552   20960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 17:00:25.306278   20960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 17:00:25.310497   20960 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  1 22:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 17:00:25.310543   20960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 17:00:25.315692   20960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 17:00:25.322849   20960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3413.pem && ln -fs /usr/share/ca-certificates/3413.pem /etc/ssl/certs/3413.pem"
	I1101 17:00:25.330360   20960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3413.pem
	I1101 17:00:25.334190   20960 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  1 22:49 /usr/share/ca-certificates/3413.pem
	I1101 17:00:25.334257   20960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3413.pem
	I1101 17:00:25.339462   20960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3413.pem /etc/ssl/certs/51391683.0"
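The commands above (test -s, ln -fs into /usr/share/ca-certificates, openssl x509 -hash -noout, then ln -fs to /etc/ssl/certs/<hash>.0) install the extra CA certificates by linking each PEM into /etc/ssl/certs under its OpenSSL subject-hash name, which is how OpenSSL discovers trusted roots. A minimal Go sketch of that convention, run locally rather than over SSH as the log does; the helper name installCACert is illustrative and not minikube's own code:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert copies a PEM into /usr/share/ca-certificates and links it
// into /etc/ssl/certs both under its own name and under its OpenSSL
// subject-hash name (<hash>.0). Sketch only, assuming local sudo access.
func installCACert(pemPath string) error {
	name := filepath.Base(pemPath)
	shared := filepath.Join("/usr/share/ca-certificates", name)
	if err := exec.Command("sudo", "cp", pemPath, shared).Run(); err != nil {
		return err
	}
	if err := exec.Command("sudo", "ln", "-fs", shared, filepath.Join("/etc/ssl/certs", name)).Run(); err != nil {
		return err
	}
	// openssl x509 -hash -noout prints the subject hash used as the link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", shared).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	return exec.Command("sudo", "ln", "-fs", filepath.Join("/etc/ssl/certs", name), link).Run()
}

func main() {
	if err := installCACert(os.Args[1]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```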
	I1101 17:00:25.346804   20960 kubeadm.go:396] StartCluster: {Name:newest-cni-165923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-165923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 17:00:25.346924   20960 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 17:00:25.370462   20960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 17:00:25.378432   20960 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1101 17:00:25.378446   20960 kubeadm.go:627] restartCluster start
	I1101 17:00:25.378502   20960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 17:00:25.385251   20960 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:25.385332   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:25.447732   20960 kubeconfig.go:135] verify returned: extract IP: "newest-cni-165923" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 17:00:25.447908   20960 kubeconfig.go:146] "newest-cni-165923" context is missing from /Users/jenkins/minikube-integration/15232-2108/kubeconfig - will repair!
	I1101 17:00:25.448257   20960 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/kubeconfig: {Name:mka869f80d5e962d9ffa24675c3f5e3e0593fcfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 17:00:25.449550   20960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 17:00:25.457571   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:25.457637   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:25.465940   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:25.667187   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:25.667365   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:25.678269   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:25.868056   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:25.868241   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:25.878717   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:26.067407   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:26.067559   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:26.078346   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:26.266103   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:26.266248   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:26.277213   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:26.467823   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:26.468101   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:26.478520   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:26.668066   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:26.668248   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:26.679669   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:26.868036   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:26.868215   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:26.879039   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:27.066098   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:27.066241   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:27.075351   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:27.266872   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:27.267011   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:27.277765   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:27.466967   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:27.467168   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:27.477750   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:27.668055   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:27.668276   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:27.678936   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:27.866427   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:27.866595   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:27.876854   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:28.067336   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:28.067476   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:28.078303   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:28.266193   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:28.266387   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:28.276883   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:28.467734   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:28.467866   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:28.478214   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:28.478223   20960 api_server.go:165] Checking apiserver status ...
	I1101 17:00:28.478280   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1101 17:00:28.486116   20960 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:28.486133   20960 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I1101 17:00:28.486140   20960 kubeadm.go:1114] stopping kube-system containers ...
	I1101 17:00:28.486229   20960 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1101 17:00:28.510374   20960 docker.go:444] Stopping containers: [83d5a0fae290 c5474d9301c0 a88748f79477 9e3fbb234296 a13c76f0b959 b1c7f0a2d66c f82a1683cea1 af5efabcae2e 9fc33b9b9edc 9f4a0258f00c 26996445c6d0 68a15afdb4a0 9a6b5025122c 786dec75c2c4 a7e0034e24e2 c7c424e35d97 ce88b7d87dc1]
	I1101 17:00:28.510482   20960 ssh_runner.go:195] Run: docker stop 83d5a0fae290 c5474d9301c0 a88748f79477 9e3fbb234296 a13c76f0b959 b1c7f0a2d66c f82a1683cea1 af5efabcae2e 9fc33b9b9edc 9f4a0258f00c 26996445c6d0 68a15afdb4a0 9a6b5025122c 786dec75c2c4 a7e0034e24e2 c7c424e35d97 ce88b7d87dc1
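The two docker commands above first list every container whose name matches the kubelet naming pattern for kube-system pods, then stop the whole batch before kubelet itself is stopped. A rough Go equivalent of that pair of commands (an illustrative helper, not minikube's ssh_runner code; it assumes a local docker CLI rather than one reached over SSH):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists containers whose names match the kubelet
// pattern for kube-system pods and stops them in one docker stop call,
// mirroring the two commands shown in the log.
func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	args := append([]string{"stop"}, ids...)
	return exec.Command("docker", args...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println(err)
	}
}
```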
	I1101 17:00:28.534513   20960 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 17:00:28.544548   20960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 17:00:28.552204   20960 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Nov  1 23:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov  1 23:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Nov  1 23:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Nov  1 23:59 /etc/kubernetes/scheduler.conf
	
	I1101 17:00:28.552278   20960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 17:00:28.560657   20960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 17:00:28.569191   20960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 17:00:28.577684   20960 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:28.577804   20960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 17:00:28.585701   20960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 17:00:28.595185   20960 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 17:00:28.595262   20960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 17:00:28.603363   20960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 17:00:28.611281   20960 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 17:00:28.611317   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 17:00:28.661432   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 17:00:29.606537   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 17:00:29.736012   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 17:00:29.791708   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 17:00:29.904878   20960 api_server.go:51] waiting for apiserver process to appear ...
	I1101 17:00:29.904965   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 17:00:30.418888   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 17:00:30.919012   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 17:00:30.933667   20960 api_server.go:71] duration metric: took 1.028799891s to wait for apiserver process to appear ...
	I1101 17:00:30.933684   20960 api_server.go:87] waiting for apiserver healthz status ...
	I1101 17:00:30.933697   20960 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54985/healthz ...
	I1101 17:00:30.935291   20960 api_server.go:268] stopped: https://127.0.0.1:54985/healthz: Get "https://127.0.0.1:54985/healthz": EOF
	I1101 17:00:31.437226   20960 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54985/healthz ...
	I1101 17:00:34.216539   20960 api_server.go:278] https://127.0.0.1:54985/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 17:00:34.216565   20960 api_server.go:102] status: https://127.0.0.1:54985/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 17:00:34.436926   20960 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54985/healthz ...
	I1101 17:00:34.444116   20960 api_server.go:278] https://127.0.0.1:54985/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 17:00:34.444137   20960 api_server.go:102] status: https://127.0.0.1:54985/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 17:00:34.935843   20960 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54985/healthz ...
	I1101 17:00:34.944155   20960 api_server.go:278] https://127.0.0.1:54985/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 17:00:34.944180   20960 api_server.go:102] status: https://127.0.0.1:54985/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 17:00:35.436644   20960 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54985/healthz ...
	I1101 17:00:35.443427   20960 api_server.go:278] https://127.0.0.1:54985/healthz returned 200:
	ok
	I1101 17:00:35.452114   20960 api_server.go:140] control plane version: v1.25.3
	I1101 17:00:35.452131   20960 api_server.go:130] duration metric: took 4.518484953s to wait for apiserver health ...
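The probe sequence above polls https://127.0.0.1:54985/healthz roughly every half second, treating the early 403 and 500 responses as "up but not ready" until a 200 arrives about 4.5 s later. A minimal sketch of such a poll loop (illustrative only, not minikube's api_server.go; for brevity it skips TLS verification instead of loading the cluster CA, and the port and timeout are taken from the log):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Simplification for the sketch: skip TLS verification instead of
		// trusting the cluster CA bundle.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			// 403/500 mean the process is up but not ready yet; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:54985/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```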
	I1101 17:00:35.452139   20960 cni.go:95] Creating CNI manager for ""
	I1101 17:00:35.452145   20960 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 17:00:35.452159   20960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 17:00:35.460397   20960 system_pods.go:59] 8 kube-system pods found
	I1101 17:00:35.460417   20960 system_pods.go:61] "coredns-565d847f94-xcxg8" [717409c1-c510-4c65-9a11-56dbb7b6f749] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 17:00:35.460423   20960 system_pods.go:61] "etcd-newest-cni-165923" [190497c4-bc15-495e-8ced-6dbeacfee88b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 17:00:35.460428   20960 system_pods.go:61] "kube-apiserver-newest-cni-165923" [0a46fb50-8536-4e16-8eb0-d176e15dd0f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 17:00:35.460433   20960 system_pods.go:61] "kube-controller-manager-newest-cni-165923" [cbe63ec4-23dd-48b0-85ca-640e4c8e39ca] Running
	I1101 17:00:35.460436   20960 system_pods.go:61] "kube-proxy-sc8lm" [97fb7e69-3c2b-44b7-bd18-97fd44f40b3d] Running
	I1101 17:00:35.460441   20960 system_pods.go:61] "kube-scheduler-newest-cni-165923" [ffecae28-01fb-4023-8426-f9d563720fe9] Running
	I1101 17:00:35.460447   20960 system_pods.go:61] "metrics-server-5c8fd5cf8-d8wg4" [113d513e-e7fd-414b-ab71-82518cb0ff93] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 17:00:35.460452   20960 system_pods.go:61] "storage-provisioner" [b3e0f7b1-84e4-4769-b4f7-f4d8e72f9f88] Running
	I1101 17:00:35.460457   20960 system_pods.go:74] duration metric: took 8.292718ms to wait for pod list to return data ...
	I1101 17:00:35.460463   20960 node_conditions.go:102] verifying NodePressure condition ...
	I1101 17:00:35.464649   20960 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I1101 17:00:35.464666   20960 node_conditions.go:123] node cpu capacity is 6
	I1101 17:00:35.464676   20960 node_conditions.go:105] duration metric: took 4.209939ms to run NodePressure ...
	I1101 17:00:35.464695   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 17:00:35.650520   20960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 17:00:35.659502   20960 ops.go:34] apiserver oom_adj: -16
	I1101 17:00:35.659522   20960 kubeadm.go:631] restartCluster took 10.281164717s
	I1101 17:00:35.659532   20960 kubeadm.go:398] StartCluster complete in 10.31283459s
	I1101 17:00:35.659546   20960 settings.go:142] acquiring lock: {Name:mkdb6df16d9cd02d82e4a95348c412b3d2076fed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 17:00:35.659657   20960 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 17:00:35.660264   20960 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/kubeconfig: {Name:mka869f80d5e962d9ffa24675c3f5e3e0593fcfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 17:00:35.663761   20960 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-165923" rescaled to 1
	I1101 17:00:35.663802   20960 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1101 17:00:35.663825   20960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 17:00:35.704559   20960 out.go:177] * Verifying Kubernetes components...
	I1101 17:00:35.663857   20960 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1101 17:00:35.664030   20960 config.go:180] Loaded profile config "newest-cni-165923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 17:00:35.778581   20960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 17:00:35.778670   20960 addons.go:65] Setting default-storageclass=true in profile "newest-cni-165923"
	I1101 17:00:35.778683   20960 addons.go:65] Setting dashboard=true in profile "newest-cni-165923"
	I1101 17:00:35.778672   20960 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-165923"
	I1101 17:00:35.778730   20960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-165923"
	I1101 17:00:35.778744   20960 addons.go:153] Setting addon dashboard=true in "newest-cni-165923"
	I1101 17:00:35.778747   20960 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-165923"
	W1101 17:00:35.778776   20960 addons.go:162] addon dashboard should already be in state true
	W1101 17:00:35.778780   20960 addons.go:162] addon storage-provisioner should already be in state true
	I1101 17:00:35.778735   20960 addons.go:65] Setting metrics-server=true in profile "newest-cni-165923"
	I1101 17:00:35.778869   20960 addons.go:153] Setting addon metrics-server=true in "newest-cni-165923"
	W1101 17:00:35.778881   20960 addons.go:162] addon metrics-server should already be in state true
	I1101 17:00:35.778944   20960 host.go:66] Checking if "newest-cni-165923" exists ...
	I1101 17:00:35.778946   20960 host.go:66] Checking if "newest-cni-165923" exists ...
	I1101 17:00:35.778954   20960 host.go:66] Checking if "newest-cni-165923" exists ...
	I1101 17:00:35.779437   20960 cli_runner.go:164] Run: docker container inspect newest-cni-165923 --format={{.State.Status}}
	I1101 17:00:35.780419   20960 cli_runner.go:164] Run: docker container inspect newest-cni-165923 --format={{.State.Status}}
	I1101 17:00:35.780427   20960 cli_runner.go:164] Run: docker container inspect newest-cni-165923 --format={{.State.Status}}
	I1101 17:00:35.780574   20960 cli_runner.go:164] Run: docker container inspect newest-cni-165923 --format={{.State.Status}}
	I1101 17:00:35.835864   20960 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1101 17:00:35.836415   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:35.897331   20960 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1101 17:00:35.903315   20960 addons.go:153] Setting addon default-storageclass=true in "newest-cni-165923"
	I1101 17:00:35.960344   20960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1101 17:00:35.960360   20960 addons.go:162] addon default-storageclass should already be in state true
	I1101 17:00:35.938336   20960 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 17:00:35.981263   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 17:00:35.917031   20960 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 17:00:35.981306   20960 host.go:66] Checking if "newest-cni-165923" exists ...
	I1101 17:00:35.981379   20960 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 17:00:35.981612   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:36.018382   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 17:00:35.982026   20960 cli_runner.go:164] Run: docker container inspect newest-cni-165923 --format={{.State.Status}}
	I1101 17:00:36.055354   20960 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I1101 17:00:36.018539   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:36.073951   20960 api_server.go:51] waiting for apiserver process to appear ...
	I1101 17:00:36.092243   20960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 17:00:36.092267   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 17:00:36.092284   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 17:00:36.092412   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:36.112433   20960 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 17:00:36.112457   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 17:00:36.112588   20960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-165923
	I1101 17:00:36.113627   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:36.121024   20960 api_server.go:71] duration metric: took 457.20422ms to wait for apiserver process to appear ...
	I1101 17:00:36.121052   20960 api_server.go:87] waiting for apiserver healthz status ...
	I1101 17:00:36.121083   20960 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54985/healthz ...
	I1101 17:00:36.131671   20960 api_server.go:278] https://127.0.0.1:54985/healthz returned 200:
	ok
	I1101 17:00:36.134012   20960 api_server.go:140] control plane version: v1.25.3
	I1101 17:00:36.134031   20960 api_server.go:130] duration metric: took 12.971024ms to wait for apiserver health ...
	I1101 17:00:36.134039   20960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 17:00:36.143504   20960 system_pods.go:59] 8 kube-system pods found
	I1101 17:00:36.143531   20960 system_pods.go:61] "coredns-565d847f94-xcxg8" [717409c1-c510-4c65-9a11-56dbb7b6f749] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 17:00:36.143542   20960 system_pods.go:61] "etcd-newest-cni-165923" [190497c4-bc15-495e-8ced-6dbeacfee88b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 17:00:36.143556   20960 system_pods.go:61] "kube-apiserver-newest-cni-165923" [0a46fb50-8536-4e16-8eb0-d176e15dd0f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 17:00:36.143561   20960 system_pods.go:61] "kube-controller-manager-newest-cni-165923" [cbe63ec4-23dd-48b0-85ca-640e4c8e39ca] Running
	I1101 17:00:36.143568   20960 system_pods.go:61] "kube-proxy-sc8lm" [97fb7e69-3c2b-44b7-bd18-97fd44f40b3d] Running
	I1101 17:00:36.143575   20960 system_pods.go:61] "kube-scheduler-newest-cni-165923" [ffecae28-01fb-4023-8426-f9d563720fe9] Running
	I1101 17:00:36.143585   20960 system_pods.go:61] "metrics-server-5c8fd5cf8-d8wg4" [113d513e-e7fd-414b-ab71-82518cb0ff93] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 17:00:36.143594   20960 system_pods.go:61] "storage-provisioner" [b3e0f7b1-84e4-4769-b4f7-f4d8e72f9f88] Running
	I1101 17:00:36.143601   20960 system_pods.go:74] duration metric: took 9.556398ms to wait for pod list to return data ...
	I1101 17:00:36.143610   20960 default_sa.go:34] waiting for default service account to be created ...
	I1101 17:00:36.147265   20960 default_sa.go:45] found service account: "default"
	I1101 17:00:36.147279   20960 default_sa.go:55] duration metric: took 3.663363ms for default service account to be created ...
	I1101 17:00:36.147289   20960 kubeadm.go:573] duration metric: took 483.476344ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1101 17:00:36.147307   20960 node_conditions.go:102] verifying NodePressure condition ...
	I1101 17:00:36.152883   20960 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I1101 17:00:36.152914   20960 node_conditions.go:123] node cpu capacity is 6
	I1101 17:00:36.152928   20960 node_conditions.go:105] duration metric: took 5.612155ms to run NodePressure ...
	I1101 17:00:36.152942   20960 start.go:217] waiting for startup goroutines ...
	I1101 17:00:36.187089   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:36.187103   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:36.197625   20960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54981 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/newest-cni-165923/id_rsa Username:docker}
	I1101 17:00:36.321114   20960 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 17:00:36.321127   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I1101 17:00:36.346397   20960 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 17:00:36.346410   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 17:00:36.408728   20960 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 17:00:36.408757   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 17:00:36.417682   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 17:00:36.417697   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 17:00:36.420321   20960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 17:00:36.420365   20960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 17:00:36.498979   20960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 17:00:36.508806   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 17:00:36.508823   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 17:00:36.534626   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 17:00:36.534647   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 17:00:36.631632   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 17:00:36.631644   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I1101 17:00:36.709532   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 17:00:36.709548   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 17:00:36.733743   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 17:00:36.733757   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 17:00:36.814588   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 17:00:36.814607   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 17:00:36.851362   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 17:00:36.851375   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 17:00:36.916016   20960 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 17:00:36.916031   20960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 17:00:36.934442   20960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 17:00:37.730506   20960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.310173301s)
	I1101 17:00:37.730567   20960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.310201604s)
	I1101 17:00:37.755443   20960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.25644573s)
	I1101 17:00:37.755474   20960 addons.go:383] Verifying addon metrics-server=true in "newest-cni-165923"
	I1101 17:00:37.948561   20960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.014099369s)
	I1101 17:00:37.972765   20960 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1101 17:00:38.009562   20960 addons.go:414] enableAddons completed in 2.345717427s
	I1101 17:00:38.010024   20960 ssh_runner.go:195] Run: rm -f paused
	I1101 17:00:38.057089   20960 start.go:506] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I1101 17:00:38.080669   20960 out.go:177] * Done! kubectl is now configured to use "newest-cni-165923" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-11-01 23:43:42 UTC, end at Wed 2022-11-02 00:10:36 UTC. --
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: Stopping Docker Application Container Engine...
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[133]: time="2022-11-01T23:43:44.286667976Z" level=info msg="Processing signal 'terminated'"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[133]: time="2022-11-01T23:43:44.287593863Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[133]: time="2022-11-01T23:43:44.288112899Z" level=info msg="Daemon shutdown complete"
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: docker.service: Succeeded.
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: Stopped Docker Application Container Engine.
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: Starting Docker Application Container Engine...
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.329553637Z" level=info msg="Starting up"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.331286692Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.331318371Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.331337221Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.331345474Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.332301530Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.332334687Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.332347311Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.332353687Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.336339693Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.342180285Z" level=info msg="Loading containers: start."
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.419278363Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.449439673Z" level=info msg="Loading containers: done."
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.456899068Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.456954410Z" level=info msg="Daemon has completed initialization"
	Nov 01 23:43:44 old-k8s-version-163757 systemd[1]: Started Docker Application Container Engine.
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.478922547Z" level=info msg="API listen on [::]:2376"
	Nov 01 23:43:44 old-k8s-version-163757 dockerd[425]: time="2022-11-01T23:43:44.484610172Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-11-02T00:10:38Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  00:10:39 up  1:40,  0 users,  load average: 0.51, 0.57, 0.72
	Linux old-k8s-version-163757 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-11-01 23:43:42 UTC, end at Wed 2022-11-02 00:10:39 UTC. --
	Nov 02 00:10:37 old-k8s-version-163757 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 02 00:10:38 old-k8s-version-163757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1666.
	Nov 02 00:10:38 old-k8s-version-163757 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 02 00:10:38 old-k8s-version-163757 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 02 00:10:38 old-k8s-version-163757 kubelet[34092]: I1102 00:10:38.175315   34092 server.go:410] Version: v1.16.0
	Nov 02 00:10:38 old-k8s-version-163757 kubelet[34092]: I1102 00:10:38.175433   34092 plugins.go:100] No cloud provider specified.
	Nov 02 00:10:38 old-k8s-version-163757 kubelet[34092]: I1102 00:10:38.175446   34092 server.go:773] Client rotation is on, will bootstrap in background
	Nov 02 00:10:38 old-k8s-version-163757 kubelet[34092]: I1102 00:10:38.177129   34092 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 02 00:10:38 old-k8s-version-163757 kubelet[34092]: W1102 00:10:38.177902   34092 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 02 00:10:38 old-k8s-version-163757 kubelet[34092]: W1102 00:10:38.177967   34092 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 02 00:10:38 old-k8s-version-163757 kubelet[34092]: F1102 00:10:38.177991   34092 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 02 00:10:38 old-k8s-version-163757 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 02 00:10:38 old-k8s-version-163757 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 02 00:10:38 old-k8s-version-163757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1667.
	Nov 02 00:10:38 old-k8s-version-163757 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 02 00:10:38 old-k8s-version-163757 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 02 00:10:38 old-k8s-version-163757 kubelet[34124]: I1102 00:10:38.933520   34124 server.go:410] Version: v1.16.0
	Nov 02 00:10:38 old-k8s-version-163757 kubelet[34124]: I1102 00:10:38.934147   34124 plugins.go:100] No cloud provider specified.
	Nov 02 00:10:38 old-k8s-version-163757 kubelet[34124]: I1102 00:10:38.934166   34124 server.go:773] Client rotation is on, will bootstrap in background
	Nov 02 00:10:38 old-k8s-version-163757 kubelet[34124]: I1102 00:10:38.936146   34124 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 02 00:10:38 old-k8s-version-163757 kubelet[34124]: W1102 00:10:38.936827   34124 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 02 00:10:38 old-k8s-version-163757 kubelet[34124]: W1102 00:10:38.936894   34124 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 02 00:10:38 old-k8s-version-163757 kubelet[34124]: F1102 00:10:38.936917   34124 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 02 00:10:38 old-k8s-version-163757 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 02 00:10:38 old-k8s-version-163757 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 17:10:38.869263   21815 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-163757 -n old-k8s-version-163757
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-163757 -n old-k8s-version-163757: exit status 2 (395.095446ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-163757" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.71s)

                                                
                                    

Test pass (262/295)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 11.9
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
10 TestDownloadOnly/v1.25.3/json-events 4.49
11 TestDownloadOnly/v1.25.3/preload-exists 0
14 TestDownloadOnly/v1.25.3/kubectl 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.31
16 TestDownloadOnly/DeleteAll 0.68
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
18 TestDownloadOnlyKic 18.76
19 TestBinaryMirror 1.65
20 TestOffline 57.1
22 TestAddons/Setup 149.75
26 TestAddons/parallel/MetricsServer 5.76
27 TestAddons/parallel/HelmTiller 11.92
29 TestAddons/parallel/CSI 39.21
30 TestAddons/parallel/Headlamp 10.35
31 TestAddons/parallel/CloudSpanner 5.46
33 TestAddons/serial/GCPAuth 16.22
34 TestAddons/StoppedEnableDisable 12.97
35 TestCertOptions 31.03
36 TestCertExpiration 238.41
37 TestDockerFlags 30.83
38 TestForceSystemdFlag 31.6
39 TestForceSystemdEnv 30.99
41 TestHyperKitDriverInstallOrUpdate 10.54
44 TestErrorSpam/setup 27.01
45 TestErrorSpam/start 2.32
46 TestErrorSpam/status 1.25
47 TestErrorSpam/pause 1.84
48 TestErrorSpam/unpause 1.89
49 TestErrorSpam/stop 13
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 52.38
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 58.11
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.08
60 TestFunctional/serial/CacheCmd/cache/add_remote 5.89
61 TestFunctional/serial/CacheCmd/cache/add_local 1.83
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
63 TestFunctional/serial/CacheCmd/cache/list 0.08
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.42
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.48
66 TestFunctional/serial/CacheCmd/cache/delete 0.16
67 TestFunctional/serial/MinikubeKubectlCmd 0.5
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.67
69 TestFunctional/serial/ExtraConfig 51.6
70 TestFunctional/serial/ComponentHealth 0.06
71 TestFunctional/serial/LogsCmd 2.96
72 TestFunctional/serial/LogsFileCmd 3.07
74 TestFunctional/parallel/ConfigCmd 0.49
75 TestFunctional/parallel/DashboardCmd 13.31
76 TestFunctional/parallel/DryRun 1.4
77 TestFunctional/parallel/InternationalLanguage 0.59
78 TestFunctional/parallel/StatusCmd 1.53
81 TestFunctional/parallel/ServiceCmd 20.89
83 TestFunctional/parallel/AddonsCmd 0.26
84 TestFunctional/parallel/PersistentVolumeClaim 28.55
86 TestFunctional/parallel/SSHCmd 0.82
87 TestFunctional/parallel/CpCmd 2.02
88 TestFunctional/parallel/MySQL 27.42
89 TestFunctional/parallel/FileSync 0.45
90 TestFunctional/parallel/CertSync 2.62
94 TestFunctional/parallel/NodeLabels 0.07
96 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
98 TestFunctional/parallel/License 0.58
99 TestFunctional/parallel/Version/short 0.11
100 TestFunctional/parallel/Version/components 0.83
101 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
102 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
103 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
104 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
105 TestFunctional/parallel/ImageCommands/ImageBuild 4.9
106 TestFunctional/parallel/ImageCommands/Setup 2.63
107 TestFunctional/parallel/DockerEnv/bash 1.95
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.41
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.29
111 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.43
112 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.38
113 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.16
114 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.75
115 TestFunctional/parallel/ImageCommands/ImageRemove 0.78
116 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.31
117 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.08
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.15
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.67
129 TestFunctional/parallel/ProfileCmd/profile_list 0.61
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.56
131 TestFunctional/parallel/MountCmd/any-port 9.81
132 TestFunctional/parallel/MountCmd/specific-port 2.37
133 TestFunctional/delete_addon-resizer_images 0.15
134 TestFunctional/delete_my-image_image 0.06
135 TestFunctional/delete_minikube_cached_images 0.06
145 TestJSONOutput/start/Command 44.36
146 TestJSONOutput/start/Audit 0
148 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
149 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
151 TestJSONOutput/pause/Command 0.74
152 TestJSONOutput/pause/Audit 0
154 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/unpause/Command 0.62
158 TestJSONOutput/unpause/Audit 0
160 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/stop/Command 12.27
164 TestJSONOutput/stop/Audit 0
166 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
168 TestErrorJSONOutput 0.74
170 TestKicCustomNetwork/create_custom_network 29.69
171 TestKicCustomNetwork/use_default_bridge_network 30.35
172 TestKicExistingNetwork 29.84
173 TestKicCustomSubnet 30.16
174 TestMainNoArgs 0.08
175 TestMinikubeProfile 63.76
178 TestMountStart/serial/StartWithMountFirst 7.64
179 TestMountStart/serial/VerifyMountFirst 0.4
180 TestMountStart/serial/StartWithMountSecond 7.56
181 TestMountStart/serial/VerifyMountSecond 0.4
182 TestMountStart/serial/DeleteFirst 2.15
183 TestMountStart/serial/VerifyMountPostDelete 0.4
184 TestMountStart/serial/Stop 1.57
185 TestMountStart/serial/RestartStopped 5.17
186 TestMountStart/serial/VerifyMountPostStop 0.4
189 TestMultiNode/serial/FreshStart2Nodes 84.96
190 TestMultiNode/serial/DeployApp2Nodes 5.02
191 TestMultiNode/serial/PingHostFrom2Pods 0.91
192 TestMultiNode/serial/AddNode 25.65
193 TestMultiNode/serial/ProfileList 0.44
194 TestMultiNode/serial/CopyFile 14.99
195 TestMultiNode/serial/StopNode 13.83
196 TestMultiNode/serial/StartAfterStop 22.44
197 TestMultiNode/serial/RestartKeepsNodes 138.61
198 TestMultiNode/serial/DeleteNode 16.89
199 TestMultiNode/serial/StopMultiNode 24.9
200 TestMultiNode/serial/RestartMultiNode 78.25
201 TestMultiNode/serial/ValidateNameConflict 32.89
205 TestPreload 138.86
207 TestScheduledStopUnix 101.76
208 TestSkaffold 63.23
210 TestInsufficientStorage 13.6
226 TestStoppedBinaryUpgrade/Setup 0.49
228 TestStoppedBinaryUpgrade/MinikubeLogs 3.59
237 TestPause/serial/Start 52.5
238 TestPause/serial/SecondStartNoReconfiguration 53.51
239 TestPause/serial/Pause 0.75
240 TestPause/serial/VerifyStatus 0.44
241 TestPause/serial/Unpause 0.74
242 TestPause/serial/PauseAgain 0.8
243 TestPause/serial/DeletePaused 2.69
244 TestPause/serial/VerifyDeletedResources 0.58
246 TestNoKubernetes/serial/StartNoK8sWithVersion 0.36
247 TestNoKubernetes/serial/StartWithK8s 29.57
248 TestNoKubernetes/serial/StartWithStopK8s 9.34
249 TestNoKubernetes/serial/Start 6.76
250 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
251 TestNoKubernetes/serial/ProfileList 36.03
252 TestNoKubernetes/serial/Stop 1.6
253 TestNoKubernetes/serial/StartNoArgs 4.18
254 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
255 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.03
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 9.01
257 TestNetworkPlugins/group/auto/Start 53.69
258 TestNetworkPlugins/group/kindnet/Start 63.91
259 TestNetworkPlugins/group/auto/KubeletFlags 0.41
260 TestNetworkPlugins/group/auto/NetCatPod 12.2
261 TestNetworkPlugins/group/auto/DNS 0.13
262 TestNetworkPlugins/group/auto/Localhost 0.11
263 TestNetworkPlugins/group/auto/HairPin 5.13
264 TestNetworkPlugins/group/cilium/Start 75.73
265 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
266 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
267 TestNetworkPlugins/group/kindnet/NetCatPod 13.24
268 TestNetworkPlugins/group/kindnet/DNS 0.14
269 TestNetworkPlugins/group/kindnet/Localhost 0.13
270 TestNetworkPlugins/group/kindnet/HairPin 0.11
271 TestNetworkPlugins/group/calico/Start 325.49
272 TestNetworkPlugins/group/cilium/ControllerPod 5.02
273 TestNetworkPlugins/group/cilium/KubeletFlags 0.48
274 TestNetworkPlugins/group/cilium/NetCatPod 14.76
275 TestNetworkPlugins/group/cilium/DNS 0.14
276 TestNetworkPlugins/group/cilium/Localhost 0.14
277 TestNetworkPlugins/group/cilium/HairPin 0.14
278 TestNetworkPlugins/group/false/Start 82.53
279 TestNetworkPlugins/group/false/KubeletFlags 0.41
280 TestNetworkPlugins/group/false/NetCatPod 12.33
281 TestNetworkPlugins/group/false/DNS 0.12
282 TestNetworkPlugins/group/false/Localhost 0.12
283 TestNetworkPlugins/group/false/HairPin 5.11
284 TestNetworkPlugins/group/bridge/Start 46.47
285 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
286 TestNetworkPlugins/group/bridge/NetCatPod 12.25
287 TestNetworkPlugins/group/bridge/DNS 0.12
288 TestNetworkPlugins/group/bridge/Localhost 0.12
289 TestNetworkPlugins/group/bridge/HairPin 0.12
290 TestNetworkPlugins/group/enable-default-cni/Start 45.07
291 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
292 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.18
293 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
294 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
295 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
296 TestNetworkPlugins/group/kubenet/Start 44.35
297 TestNetworkPlugins/group/calico/ControllerPod 5.02
298 TestNetworkPlugins/group/calico/KubeletFlags 0.42
299 TestNetworkPlugins/group/calico/NetCatPod 13.21
300 TestNetworkPlugins/group/kubenet/KubeletFlags 0.42
301 TestNetworkPlugins/group/kubenet/NetCatPod 12.21
302 TestNetworkPlugins/group/calico/DNS 0.21
303 TestNetworkPlugins/group/calico/Localhost 0.15
304 TestNetworkPlugins/group/calico/HairPin 0.15
307 TestNetworkPlugins/group/kubenet/DNS 0.15
308 TestNetworkPlugins/group/kubenet/Localhost 0.16
311 TestStartStop/group/no-preload/serial/FirstStart 52.98
312 TestStartStop/group/no-preload/serial/DeployApp 10.29
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.87
314 TestStartStop/group/no-preload/serial/Stop 12.5
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.36
316 TestStartStop/group/no-preload/serial/SecondStart 300.78
319 TestStartStop/group/old-k8s-version/serial/Stop 1.6
320 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.37
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 21.02
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
324 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.43
325 TestStartStop/group/no-preload/serial/Pause 3.37
327 TestStartStop/group/embed-certs/serial/FirstStart 52.68
328 TestStartStop/group/embed-certs/serial/DeployApp 9.28
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.9
330 TestStartStop/group/embed-certs/serial/Stop 12.52
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.37
332 TestStartStop/group/embed-certs/serial/SecondStart 304.07
334 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 17.02
335 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
336 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.44
337 TestStartStop/group/embed-certs/serial/Pause 3.33
339 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.35
340 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.81
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.37
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.42
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 298.92
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.44
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.33
350 TestStartStop/group/newest-cni/serial/FirstStart 43.3
351 TestStartStop/group/newest-cni/serial/DeployApp 0
352 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.81
353 TestStartStop/group/newest-cni/serial/Stop 12.51
354 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.37
355 TestStartStop/group/newest-cni/serial/SecondStart 18.28
356 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.47
359 TestStartStop/group/newest-cni/serial/Pause 3.3
x
+
TestDownloadOnly/v1.16.0/json-events (11.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-154410 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-154410 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (11.897153224s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (11.90s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-154410
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-154410: exit status 85 (293.56357ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-154410 | jenkins | v1.27.1 | 01 Nov 22 15:44 PDT |          |
	|         | -p download-only-154410        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/01 15:44:10
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 15:44:10.753375    3415 out.go:296] Setting OutFile to fd 1 ...
	I1101 15:44:10.753623    3415 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 15:44:10.753632    3415 out.go:309] Setting ErrFile to fd 2...
	I1101 15:44:10.753637    3415 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 15:44:10.753755    3415 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	W1101 15:44:10.753859    3415 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15232-2108/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15232-2108/.minikube/config/config.json: no such file or directory
	I1101 15:44:10.754624    3415 out.go:303] Setting JSON to true
	I1101 15:44:10.773677    3415 start.go:116] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":825,"bootTime":1667341825,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1101 15:44:10.773764    3415 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1101 15:44:10.798626    3415 out.go:97] [download-only-154410] minikube v1.27.1 on Darwin 13.0
	I1101 15:44:10.798869    3415 notify.go:220] Checking for updates...
	W1101 15:44:10.798942    3415 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 15:44:10.819817    3415 out.go:169] MINIKUBE_LOCATION=15232
	I1101 15:44:10.843100    3415 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 15:44:10.890921    3415 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1101 15:44:10.912932    3415 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 15:44:10.934992    3415 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	W1101 15:44:10.979004    3415 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 15:44:10.979365    3415 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 15:44:11.040350    3415 docker.go:137] docker version: linux-20.10.20
	I1101 15:44:11.040486    3415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 15:44:11.186015    3415 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2022-11-01 22:44:11.103392048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 15:44:11.207815    3415 out.go:97] Using the docker driver based on user configuration
	I1101 15:44:11.207912    3415 start.go:282] selected driver: docker
	I1101 15:44:11.207925    3415 start.go:808] validating driver "docker" against <nil>
	I1101 15:44:11.208182    3415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 15:44:11.351371    3415 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2022-11-01 22:44:11.271682694 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 15:44:11.351476    3415 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1101 15:44:11.355454    3415 start_flags.go:384] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I1101 15:44:11.355582    3415 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 15:44:11.376944    3415 out.go:169] Using Docker Desktop driver with root privileges
	I1101 15:44:11.398914    3415 cni.go:95] Creating CNI manager for ""
	I1101 15:44:11.398948    3415 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1101 15:44:11.398977    3415 start_flags.go:317] config:
	{Name:download-only-154410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-154410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 15:44:11.420883    3415 out.go:97] Starting control plane node download-only-154410 in cluster download-only-154410
	I1101 15:44:11.420943    3415 cache.go:120] Beginning downloading kic base image for docker with docker
	I1101 15:44:11.443082    3415 out.go:97] Pulling base image ...
	I1101 15:44:11.443135    3415 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1101 15:44:11.443223    3415 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 15:44:11.499701    3415 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1101 15:44:11.499988    3415 image.go:60] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
	I1101 15:44:11.500128    3415 image.go:120] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1101 15:44:11.500796    3415 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1101 15:44:11.500805    3415 cache.go:57] Caching tarball of preloaded images
	I1101 15:44:11.500954    3415 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1101 15:44:11.524041    3415 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1101 15:44:11.524124    3415 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1101 15:44:11.608407    3415 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1101 15:44:13.984315    3415 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1101 15:44:13.984494    3415 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1101 15:44:14.526900    3415 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1101 15:44:14.527120    3415 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/download-only-154410/config.json ...
	I1101 15:44:14.527153    3415 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/download-only-154410/config.json: {Name:mk95f2e691c8bb18178be316ac73f2e2eb93961d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 15:44:14.527513    3415 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1101 15:44:14.528029    3415 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-154410"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/json-events (4.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-154410 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-154410 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker : (4.487627632s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (4.49s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/kubectl
--- PASS: TestDownloadOnly/v1.25.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-154410
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-154410: exit status 85 (310.043345ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-154410 | jenkins | v1.27.1 | 01 Nov 22 15:44 PDT |          |
	|         | -p download-only-154410        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-154410 | jenkins | v1.27.1 | 01 Nov 22 15:44 PDT |          |
	|         | -p download-only-154410        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/01 15:44:22
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 15:44:22.945337    3458 out.go:296] Setting OutFile to fd 1 ...
	I1101 15:44:22.945528    3458 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 15:44:22.945533    3458 out.go:309] Setting ErrFile to fd 2...
	I1101 15:44:22.945537    3458 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 15:44:22.945645    3458 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	W1101 15:44:22.945734    3458 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15232-2108/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15232-2108/.minikube/config/config.json: no such file or directory
	I1101 15:44:22.946107    3458 out.go:303] Setting JSON to true
	I1101 15:44:22.964379    3458 start.go:116] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":837,"bootTime":1667341825,"procs":387,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1101 15:44:22.964466    3458 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1101 15:44:22.985847    3458 out.go:97] [download-only-154410] minikube v1.27.1 on Darwin 13.0
	I1101 15:44:22.986100    3458 notify.go:220] Checking for updates...
	I1101 15:44:23.007698    3458 out.go:169] MINIKUBE_LOCATION=15232
	I1101 15:44:23.028975    3458 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 15:44:23.050922    3458 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1101 15:44:23.073187    3458 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 15:44:23.095144    3458 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-154410"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.31s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.68s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.68s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-154410
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                    
x
+
TestDownloadOnlyKic (18.76s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-154429 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-154429 --force --alsologtostderr --driver=docker : (17.675228666s)
helpers_test.go:175: Cleaning up "download-docker-154429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-154429
--- PASS: TestDownloadOnlyKic (18.76s)

                                                
                                    
x
+
TestBinaryMirror (1.65s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-154447 --alsologtostderr --binary-mirror http://127.0.0.1:49449 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-154447 --alsologtostderr --binary-mirror http://127.0.0.1:49449 --driver=docker : (1.031059844s)
helpers_test.go:175: Cleaning up "binary-mirror-154447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-154447
--- PASS: TestBinaryMirror (1.65s)

                                                
                                    
x
+
TestOffline (57.1s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-161858 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-161858 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (54.314011132s)
helpers_test.go:175: Cleaning up "offline-docker-161858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-161858

                                                
                                                
=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-161858: (2.785998956s)
--- PASS: TestOffline (57.10s)

                                                
                                    
x
+
TestAddons/Setup (149.75s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-154449 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-154449 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m29.750431073s)
--- PASS: TestAddons/Setup (149.75s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.76s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: metrics-server stabilized in 3.669189ms
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-769cd898cd-ssdg9" [263e8be3-6aed-4fa6-bd4d-3b000e6ad177] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014725429s
addons_test.go:368: (dbg) Run:  kubectl --context addons-154449 top pods -n kube-system
addons_test.go:385: (dbg) Run:  out/minikube-darwin-amd64 -p addons-154449 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.76s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.92s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: tiller-deploy stabilized in 3.42247ms
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-8kv52" [97b4a068-e101-4d87-97fb-00127ee7c96b] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.011396528s
addons_test.go:426: (dbg) Run:  kubectl --context addons-154449 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:426: (dbg) Done: kubectl --context addons-154449 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.423576271s)
addons_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p addons-154449 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.92s)

                                                
                                    
x
+
TestAddons/parallel/CSI (39.21s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:514: csi-hostpath-driver pods stabilized in 4.195168ms
addons_test.go:517: (dbg) Run:  kubectl --context addons-154449 create -f testdata/csi-hostpath-driver/pvc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:522: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-154449 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:527: (dbg) Run:  kubectl --context addons-154449 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:532: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [11724b30-9d75-427f-bf05-e9eceae8f79e] Pending
helpers_test.go:342: "task-pv-pod" [11724b30-9d75-427f-bf05-e9eceae8f79e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [11724b30-9d75-427f-bf05-e9eceae8f79e] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:532: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.010860816s
addons_test.go:537: (dbg) Run:  kubectl --context addons-154449 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:542: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-154449 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:417: (dbg) Run:  kubectl --context addons-154449 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:547: (dbg) Run:  kubectl --context addons-154449 delete pod task-pv-pod

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:553: (dbg) Run:  kubectl --context addons-154449 delete pvc hpvc
addons_test.go:559: (dbg) Run:  kubectl --context addons-154449 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:564: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-154449 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:569: (dbg) Run:  kubectl --context addons-154449 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:574: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [f8bdd0b0-c321-4192-a812-92e3d9d303ae] Pending
helpers_test.go:342: "task-pv-pod-restore" [f8bdd0b0-c321-4192-a812-92e3d9d303ae] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [f8bdd0b0-c321-4192-a812-92e3d9d303ae] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:574: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.010619263s
addons_test.go:579: (dbg) Run:  kubectl --context addons-154449 delete pod task-pv-pod-restore
addons_test.go:583: (dbg) Run:  kubectl --context addons-154449 delete pvc hpvc-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-154449 delete volumesnapshot new-snapshot-demo
addons_test.go:591: (dbg) Run:  out/minikube-darwin-amd64 -p addons-154449 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:591: (dbg) Done: out/minikube-darwin-amd64 -p addons-154449 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.822529654s)
addons_test.go:595: (dbg) Run:  out/minikube-darwin-amd64 -p addons-154449 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.21s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (10.35s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-154449 --alsologtostderr -v=1
addons_test.go:738: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-154449 --alsologtostderr -v=1: (1.338664692s)
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-5f4cf474d8-jvhbv" [e9807add-a482-431b-9656-befeb9c30055] Pending
helpers_test.go:342: "headlamp-5f4cf474d8-jvhbv" [e9807add-a482-431b-9656-befeb9c30055] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-jvhbv" [e9807add-a482-431b-9656-befeb9c30055] Running
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.008694379s
--- PASS: TestAddons/parallel/Headlamp (10.35s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.46s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:342: "cloud-spanner-emulator-6c47ff8fb6-jjz2m" [e19c7d60-25d3-4de3-b362-40314efa1f9b] Running

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00877637s
addons_test.go:762: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-154449
--- PASS: TestAddons/parallel/CloudSpanner (5.46s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth (16.22s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:606: (dbg) Run:  kubectl --context addons-154449 create -f testdata/busybox.yaml
addons_test.go:613: (dbg) Run:  kubectl --context addons-154449 create sa gcp-auth-test
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [f6726ce6-3ba7-42e2-aabe-da27d24710d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [f6726ce6-3ba7-42e2-aabe-da27d24710d0] Running
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.009276134s
addons_test.go:625: (dbg) Run:  kubectl --context addons-154449 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:637: (dbg) Run:  kubectl --context addons-154449 describe sa gcp-auth-test
addons_test.go:651: (dbg) Run:  kubectl --context addons-154449 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:675: (dbg) Run:  kubectl --context addons-154449 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:688: (dbg) Run:  out/minikube-darwin-amd64 -p addons-154449 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:688: (dbg) Done: out/minikube-darwin-amd64 -p addons-154449 addons disable gcp-auth --alsologtostderr -v=1: (6.623099583s)
--- PASS: TestAddons/serial/GCPAuth (16.22s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.97s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:135: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-154449
addons_test.go:135: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-154449: (12.547052102s)
addons_test.go:139: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-154449
addons_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-154449
--- PASS: TestAddons/StoppedEnableDisable (12.97s)

TestCertOptions (31.03s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-162953 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-162953 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (27.326508933s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-162953 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-162953 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-162953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-162953
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-162953: (2.783071192s)
--- PASS: TestCertOptions (31.03s)
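The cert-options run above can be reproduced by hand with the same commands the test drives; a minimal sketch, where cert-options-demo is a placeholder profile name rather than anything the suite creates:

  # start a cluster with extra apiserver SANs and a non-default apiserver port
  out/minikube-darwin-amd64 start -p cert-options-demo --memory=2048 --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker
  # the extra IP/name should appear as SANs in the generated apiserver certificate
  out/minikube-darwin-amd64 -p cert-options-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
  # admin.conf inside the node should point at port 8555
  out/minikube-darwin-amd64 ssh -p cert-options-demo -- "sudo cat /etc/kubernetes/admin.conf"
  out/minikube-darwin-amd64 delete -p cert-options-demo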

TestCertExpiration (238.41s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-162646 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-162646 --memory=2048 --cert-expiration=3m --driver=docker : (28.565316419s)
E1101 16:27:19.485587    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 16:27:44.316221    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 16:28:32.022588    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-162646 --memory=2048 --cert-expiration=8760h --driver=docker 

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-162646 --memory=2048 --cert-expiration=8760h --driver=docker : (27.180250063s)
helpers_test.go:175: Cleaning up "cert-expiration-162646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-162646
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-162646: (2.66432002s)
--- PASS: TestCertExpiration (238.41s)

TestDockerFlags (30.83s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-162922 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-162922 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (27.323571449s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-162922 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-162922 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-162922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-162922
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-162922: (2.657652527s)
--- PASS: TestDockerFlags (30.83s)
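The docker-flags run is doing the same thing a user would do to verify --docker-env/--docker-opt plumbing; a hedged sketch using the test's own probes (docker-flags-demo is a placeholder profile name):

  out/minikube-darwin-amd64 start -p docker-flags-demo --memory=2048 --docker-env=FOO=BAR --docker-opt=debug --driver=docker
  # FOO=BAR should be listed in the dockerd unit environment
  out/minikube-darwin-amd64 -p docker-flags-demo ssh "sudo systemctl show docker --property=Environment --no-pager"
  # the debug option should appear on the dockerd command line
  out/minikube-darwin-amd64 -p docker-flags-demo ssh "sudo systemctl show docker --property=ExecStart --no-pager"
  out/minikube-darwin-amd64 delete -p docker-flags-demo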

TestForceSystemdFlag (31.60s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-162516 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
E1101 16:25:22.537057    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-162516 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (28.499312191s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-162516 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-162516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-162516
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-162516: (2.6195383s)
--- PASS: TestForceSystemdFlag (31.60s)

TestForceSystemdEnv (30.99s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-162615 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E1101 16:26:15.877958    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-162615 --memory=2048 --alsologtostderr -v=5 --driver=docker : (27.814243972s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-162615 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-162615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-162615
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-162615: (2.683520557s)
--- PASS: TestForceSystemdEnv (30.99s)
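Both force-systemd variants end with the same cgroup-driver probe. A sketch of that check follows; the flag variant uses --force-systemd as logged above, while the MINIKUBE_FORCE_SYSTEMD environment variable shown here is an assumption about how the env variant is driven (force-systemd-demo is a placeholder profile name):

  # expect "systemd" rather than "cgroupfs" when systemd is forced
  MINIKUBE_FORCE_SYSTEMD=true out/minikube-darwin-amd64 start -p force-systemd-demo --memory=2048 --driver=docker
  out/minikube-darwin-amd64 -p force-systemd-demo ssh "docker info --format {{.CgroupDriver}}"
  out/minikube-darwin-amd64 delete -p force-systemd-demo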

TestHyperKitDriverInstallOrUpdate (10.54s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.54s)

TestErrorSpam/setup (27.01s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-154846 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-154846 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 --driver=docker : (27.007902033s)
--- PASS: TestErrorSpam/setup (27.01s)

TestErrorSpam/start (2.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 start --dry-run
--- PASS: TestErrorSpam/start (2.32s)

TestErrorSpam/status (1.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 status
--- PASS: TestErrorSpam/status (1.25s)

TestErrorSpam/pause (1.84s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 pause
--- PASS: TestErrorSpam/pause (1.84s)

TestErrorSpam/unpause (1.89s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 unpause
--- PASS: TestErrorSpam/unpause (1.89s)

TestErrorSpam/stop (13.00s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 stop: (12.331395289s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-154846 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-154846 stop
--- PASS: TestErrorSpam/stop (13.00s)

TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/test/nested/copy/3413/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-154936 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2161: (dbg) Done: out/minikube-darwin-amd64 start -p functional-154936 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (52.379222255s)
--- PASS: TestFunctional/serial/StartWithProxy (52.38s)

TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (58.11s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-154936 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-darwin-amd64 start -p functional-154936 --alsologtostderr -v=8: (58.110456539s)
functional_test.go:656: soft start took 58.111054623s for "functional-154936" cluster.
--- PASS: TestFunctional/serial/SoftStart (58.11s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-154936 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 cache add k8s.gcr.io/pause:3.1: (2.014169057s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 cache add k8s.gcr.io/pause:3.3: (2.063479222s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 cache add k8s.gcr.io/pause:latest: (1.812850269s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.89s)

TestFunctional/serial/CacheCmd/cache/add_local (1.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-154936 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3103897512/001
functional_test.go:1082: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 cache add minikube-local-cache-test:functional-154936
functional_test.go:1082: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 cache add minikube-local-cache-test:functional-154936: (1.276105027s)
functional_test.go:1087: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 cache delete minikube-local-cache-test:functional-154936
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-154936
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.83s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154936 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (403.834426ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 cache reload: (1.240824663s)
functional_test.go:1156: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.48s)
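The cache_reload sequence above amounts to: cache an image, delete it from the node's runtime, confirm it is gone, then push the cached copy back in with cache reload. A sketch with the same commands against a hypothetical profile named demo:

  out/minikube-darwin-amd64 -p demo cache add k8s.gcr.io/pause:latest
  out/minikube-darwin-amd64 -p demo ssh sudo docker rmi k8s.gcr.io/pause:latest
  out/minikube-darwin-amd64 -p demo ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # expected to fail: image was removed
  out/minikube-darwin-amd64 -p demo cache reload
  out/minikube-darwin-amd64 -p demo ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again after the reload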

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.50s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 kubectl -- --context functional-154936 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.50s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.67s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-154936 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.67s)

TestFunctional/serial/ExtraConfig (51.60s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-154936 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1101 15:52:19.348011    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:52:19.355892    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:52:19.366056    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:52:19.387459    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:52:19.427657    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:52:19.508638    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:52:19.668849    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:52:19.989315    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:52:20.629500    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:52:21.910514    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:52:24.470819    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:52:29.592955    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-darwin-amd64 start -p functional-154936 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (51.603138309s)
functional_test.go:754: restart took 51.603318166s for "functional-154936" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (51.60s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-154936 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (2.96s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 logs
functional_test.go:1229: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 logs: (2.964138757s)
--- PASS: TestFunctional/serial/LogsCmd (2.96s)

TestFunctional/serial/LogsFileCmd (3.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd1827025458/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd1827025458/001/logs.txt: (3.071853413s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.07s)

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154936 config get cpus: exit status 14 (59.285784ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154936 config get cpus: exit status 14 (58.529409ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

TestFunctional/parallel/DashboardCmd (13.31s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-154936 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-154936 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 5890: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.31s)

TestFunctional/parallel/DryRun (1.40s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-154936 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:967: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-154936 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (664.360629ms)

-- stdout --
	* [functional-154936] minikube v1.27.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1101 15:53:47.847763    5821 out.go:296] Setting OutFile to fd 1 ...
	I1101 15:53:47.848192    5821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 15:53:47.848198    5821 out.go:309] Setting ErrFile to fd 2...
	I1101 15:53:47.848202    5821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 15:53:47.848382    5821 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	I1101 15:53:47.848869    5821 out.go:303] Setting JSON to false
	I1101 15:53:47.867903    5821 start.go:116] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1402,"bootTime":1667341825,"procs":389,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1101 15:53:47.868102    5821 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1101 15:53:47.891342    5821 out.go:177] * [functional-154936] minikube v1.27.1 on Darwin 13.0
	I1101 15:53:47.913229    5821 notify.go:220] Checking for updates...
	I1101 15:53:47.934040    5821 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 15:53:47.977025    5821 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 15:53:47.998467    5821 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1101 15:53:48.020094    5821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 15:53:48.041459    5821 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	I1101 15:53:48.064051    5821 config.go:180] Loaded profile config "functional-154936": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 15:53:48.064757    5821 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 15:53:48.127695    5821 docker.go:137] docker version: linux-20.10.20
	I1101 15:53:48.127842    5821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 15:53:48.272044    5821 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-01 22:53:48.197691625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 15:53:48.293023    5821 out.go:177] * Using the docker driver based on existing profile
	I1101 15:53:48.313975    5821 start.go:282] selected driver: docker
	I1101 15:53:48.313996    5821 start.go:808] validating driver "docker" against &{Name:functional-154936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-154936 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fals
e portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 15:53:48.314089    5821 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 15:53:48.337985    5821 out.go:177] 
	W1101 15:53:48.375132    5821 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 15:53:48.397052    5821 out.go:177] 

** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-154936 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.40s)

TestFunctional/parallel/InternationalLanguage (0.59s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-154936 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-154936 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (587.426811ms)

-- stdout --
	* [functional-154936] minikube v1.27.1 sur Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1101 15:53:49.239606    5863 out.go:296] Setting OutFile to fd 1 ...
	I1101 15:53:49.239850    5863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 15:53:49.239856    5863 out.go:309] Setting ErrFile to fd 2...
	I1101 15:53:49.239859    5863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 15:53:49.239992    5863 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	I1101 15:53:49.240450    5863 out.go:303] Setting JSON to false
	I1101 15:53:49.259784    5863 start.go:116] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1404,"bootTime":1667341825,"procs":389,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1101 15:53:49.259898    5863 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1101 15:53:49.282333    5863 out.go:177] * [functional-154936] minikube v1.27.1 sur Darwin 13.0
	I1101 15:53:49.304266    5863 notify.go:220] Checking for updates...
	I1101 15:53:49.325373    5863 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 15:53:49.347078    5863 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	I1101 15:53:49.368220    5863 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1101 15:53:49.389434    5863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 15:53:49.411342    5863 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	I1101 15:53:49.433568    5863 config.go:180] Loaded profile config "functional-154936": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 15:53:49.433961    5863 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 15:53:49.496300    5863 docker.go:137] docker version: linux-20.10.20
	I1101 15:53:49.496487    5863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 15:53:49.641865    5863 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-01 22:53:49.567366194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 15:53:49.664103    5863 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1101 15:53:49.685596    5863 start.go:282] selected driver: docker
	I1101 15:53:49.685619    5863 start.go:808] validating driver "docker" against &{Name:functional-154936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-154936 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fals
e portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 15:53:49.685748    5863 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 15:53:49.710963    5863 out.go:177] 
	W1101 15:53:49.733028    5863 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 15:53:49.754754    5863 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.59s)

TestFunctional/parallel/StatusCmd (1.53s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 status
functional_test.go:853: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.53s)
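The status checks exercise three output modes: the default table, a custom Go-template format string, and JSON. A sketch against a hypothetical profile named demo (the "kublet" spelling in the format string is copied verbatim from the test):

  out/minikube-darwin-amd64 -p demo status
  out/minikube-darwin-amd64 -p demo status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
  out/minikube-darwin-amd64 -p demo status -o json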

TestFunctional/parallel/ServiceCmd (20.89s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-154936 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-154936 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-r9hfc" [b95bb8ad-4a48-4313-990f-681801502263] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-r9hfc" [b95bb8ad-4a48-4313-990f-681801502263] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 14.00868823s
functional_test.go:1449: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 service --namespace=default --https --url hello-node
functional_test.go:1463: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 service --namespace=default --https --url hello-node: (2.027089225s)
functional_test.go:1476: found endpoint: https://127.0.0.1:50297
functional_test.go:1491: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 service hello-node --url --format={{.IP}}
functional_test.go:1491: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 service hello-node --url --format={{.IP}}: (2.027186064s)
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 service hello-node --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1505: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 service hello-node --url: (2.025052676s)
functional_test.go:1511: found endpoint for hello-node: http://127.0.0.1:50313
--- PASS: TestFunctional/parallel/ServiceCmd (20.89s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [bc2c30c2-4aad-44fd-8db3-1bc893caea71] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009991537s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-154936 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-154936 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-154936 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-154936 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [8ce1315a-30b1-4bf6-afad-915a2831cbc4] Pending
helpers_test.go:342: "sp-pod" [8ce1315a-30b1-4bf6-afad-915a2831cbc4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [8ce1315a-30b1-4bf6-afad-915a2831cbc4] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.008850568s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-154936 exec sp-pod -- touch /tmp/mount/foo

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-154936 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-154936 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [58e92373-b129-4b39-ae8d-d2563ed40831] Pending
helpers_test.go:342: "sp-pod" [58e92373-b129-4b39-ae8d-d2563ed40831] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [58e92373-b129-4b39-ae8d-d2563ed40831] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.031612424s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-154936 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.55s)
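The block above exercises PVC provisioning and data persistence across pod recreation. A minimal shell sketch of that flow, assuming the same testdata manifests and profile shown in the log; the kubectl wait calls stand in for the test's own polling helper:

# Apply the claim and a pod that mounts it, write a file, recreate the pod,
# and confirm the file written to the claim survives (context from the log).
kubectl --context functional-154936 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-154936 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-154936 wait --for=condition=Ready pod/sp-pod --timeout=180s
kubectl --context functional-154936 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-154936 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-154936 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-154936 wait --for=condition=Ready pod/sp-pod --timeout=180s
kubectl --context functional-154936 exec sp-pod -- ls /tmp/mount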

                                                
                                    
TestFunctional/parallel/SSHCmd (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh -n functional-154936 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 cp functional-154936:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd1546248515/001/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh -n functional-154936 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)

                                                
                                    
TestFunctional/parallel/MySQL (27.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-154936 replace --force -f testdata/mysql.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-bzvjk" [d8f86139-374e-439e-86ea-b8c51905f4b9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-bzvjk" [d8f86139-374e-439e-86ea-b8c51905f4b9] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.014470291s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-154936 exec mysql-596b7fcdbf-bzvjk -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-154936 exec mysql-596b7fcdbf-bzvjk -- mysql -ppassword -e "show databases;": exit status 1 (182.787237ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-154936 exec mysql-596b7fcdbf-bzvjk -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-154936 exec mysql-596b7fcdbf-bzvjk -- mysql -ppassword -e "show databases;": exit status 1 (140.336117ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-154936 exec mysql-596b7fcdbf-bzvjk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.42s)
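The two non-zero exits above (ERROR 1045, then ERROR 2002) are expected while mysqld is still initializing; the test simply re-runs the query until it succeeds. A rough shell equivalent of that retry, assuming the pod name from the log:

# Retry "show databases" until the mysql container accepts the root password.
until kubectl --context functional-154936 exec mysql-596b7fcdbf-bzvjk -- \
    mysql -ppassword -e "show databases;"; do
  sleep 5
done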

                                                
                                    
TestFunctional/parallel/FileSync (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/3413/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "sudo cat /etc/test/nested/copy/3413/hosts"
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.45s)

                                                
                                    
TestFunctional/parallel/CertSync (2.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/3413.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "sudo cat /etc/ssl/certs/3413.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/3413.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "sudo cat /usr/share/ca-certificates/3413.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/34132.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "sudo cat /etc/ssl/certs/34132.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/34132.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "sudo cat /usr/share/ca-certificates/34132.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.62s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-154936 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154936 ssh "sudo systemctl is-active crio": exit status 1 (531.061849ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                    
TestFunctional/parallel/License (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.58s)

                                                
                                    
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
TestFunctional/parallel/Version/components (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-154936 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-154936
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-154936
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-154936 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-154936 | de46719c5dc95 | 30B    |
| docker.io/library/nginx                     | alpine            | b997307a58ab5 | 23.6MB |
| registry.k8s.io/kube-proxy                  | v1.25.3           | beaaf00edd38a | 61.7MB |
| docker.io/library/nginx                     | latest            | 76c69feac34e8 | 142MB  |
| docker.io/library/mysql                     | 5.7               | 14905234a4ed4 | 495MB  |
| registry.k8s.io/kube-controller-manager     | v1.25.3           | 6039992312758 | 117MB  |
| k8s.gcr.io/pause                            | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.25.3           | 0346dbd74bcb9 | 128MB  |
| registry.k8s.io/kube-scheduler              | v1.25.3           | 6d23ec0e8b87e | 50.6MB |
| registry.k8s.io/pause                       | 3.8               | 4873874c08efc | 711kB  |
| gcr.io/google-containers/addon-resizer      | functional-154936 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/etcd                        | 3.5.4-0           | a8a176a5d5d69 | 300MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-154936 image ls --format json:
[{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23600000"},{"id":"60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"117000000"},{"id":"beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"61700000"},{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"},{"id":"de46719c5dc95a7b37fdcd196a6c71176b8f3ba313f95932bae22f4251ef813d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-154936"],"size":"30"},{"id":"14905234a4ed471d6da5b7e09d9e9f62f4d350713e2b0e8c86652ebcbf710238","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"495000000"},{"id":"6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"50600000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-154936"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"128000000"},{"id":"4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.8"],"size":"711000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-154936 image ls --format yaml:
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "128000000"
- id: beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "61700000"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: de46719c5dc95a7b37fdcd196a6c71176b8f3ba313f95932bae22f4251ef813d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-154936
size: "30"
- id: 76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 14905234a4ed471d6da5b7e09d9e9f62f4d350713e2b0e8c86652ebcbf710238
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "495000000"
- id: 6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "50600000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23600000"
- id: 60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "117000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-154936
size: "32900000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154936 ssh pgrep buildkitd: exit status 1 (424.864637ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image build -t localhost/my-image:functional-154936 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 image build -t localhost/my-image:functional-154936 testdata/build: (4.101915239s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-154936 image build -t localhost/my-image:functional-154936 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 3b72d17e5a4b
Removing intermediate container 3b72d17e5a4b
---> 8693c7e971f7
Step 3/3 : ADD content.txt /
---> a6cf93150223
Successfully built a6cf93150223
Successfully tagged localhost/my-image:functional-154936
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image ls
2022/11/01 15:54:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.90s)
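From the three build steps printed above, the Dockerfile under testdata/build can be inferred to be roughly the following. This is a reconstruction from the log, not the checked-in file, and the content.txt payload is not shown in the log, so an empty file stands in here:

# Recreate an equivalent build context and run the same image build command.
mkdir -p build && cd build
printf '' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-darwin-amd64 -p functional-154936 image build -t localhost/my-image:functional-154936 .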

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.567058719s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-154936
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.63s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-154936 docker-env) && out/minikube-darwin-amd64 status -p functional-154936"
E1101 15:52:39.832872    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
functional_test.go:492: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-154936 docker-env) && out/minikube-darwin-amd64 status -p functional-154936": (1.235592703s)
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-154936 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.95s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.41s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image load --daemon gcr.io/google-containers/addon-resizer:functional-154936

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 image load --daemon gcr.io/google-containers/addon-resizer:functional-154936: (3.0907651s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image load --daemon gcr.io/google-containers/addon-resizer:functional-154936

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 image load --daemon gcr.io/google-containers/addon-resizer:functional-154936: (2.070401171s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.510011726s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-154936
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image load --daemon gcr.io/google-containers/addon-resizer:functional-154936
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 image load --daemon gcr.io/google-containers/addon-resizer:functional-154936: (4.260333608s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image save gcr.io/google-containers/addon-resizer:functional-154936 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 image save gcr.io/google-containers/addon-resizer:functional-154936 /Users/jenkins/workspace/addon-resizer-save.tar: (1.753042053s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.75s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image rm gcr.io/google-containers/addon-resizer:functional-154936
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.93497738s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-154936
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 image save --daemon gcr.io/google-containers/addon-resizer:functional-154936
E1101 15:53:00.314180    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-154936 image save --daemon gcr.io/google-containers/addon-resizer:functional-154936: (2.95838432s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-154936
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-154936 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-154936 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [f7c506a8-d274-45e5-ba6c-849079a39e1c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [f7c506a8-d274-45e5-ba6c-849079a39e1c] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.009237298s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.15s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-154936 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-154936 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 5488: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.67s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "521.892124ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "86.77698ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.61s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "473.496895ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "81.018727ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-154936 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3037396058/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1667343215621816000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3037396058/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1667343215621816000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3037396058/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1667343215621816000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3037396058/001/test-1667343215621816000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154936 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (420.723864ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 22:53 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 22:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 22:53 test-1667343215621816000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh cat /mount-9p/test-1667343215621816000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-154936 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [048c7eaa-d736-47fe-9ec1-bba0b9ab08aa] Pending
helpers_test.go:342: "busybox-mount" [048c7eaa-d736-47fe-9ec1-bba0b9ab08aa] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1101 15:53:41.273484    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [048c7eaa-d736-47fe-9ec1-bba0b9ab08aa] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [048c7eaa-d736-47fe-9ec1-bba0b9ab08aa] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.009007318s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-154936 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh stat /mount-9p/created-by-pod

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-154936 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3037396058/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.81s)
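For reference, a minimal sketch of the retry visible in this block, where the first "findmnt -T /mount-9p | grep 9p" exits 1 and a later run succeeds: poll the guest over minikube ssh until the 9p mount shows up. The binary path and profile name are copied from the log; the 30-second deadline and the loop itself are illustrative, not the suite's helper.

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    const profile = "functional-154936"          // profile name taken from the log above
    deadline := time.Now().Add(30 * time.Second) // arbitrary deadline for illustration
    for {
        cmd := exec.Command("out/minikube-darwin-amd64", "-p", profile,
            "ssh", "findmnt -T /mount-9p | grep 9p")
        out, err := cmd.CombinedOutput()
        if err == nil {
            fmt.Printf("mount is visible:\n%s", out)
            return
        }
        if time.Now().After(deadline) {
            fmt.Printf("gave up waiting for /mount-9p: %v\n%s", err, out)
            return
        }
        time.Sleep(time.Second)
    }
}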

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-154936 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2279492774/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154936 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (409.672063ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-154936 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2279492774/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154936 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154936 ssh "sudo umount -f /mount-9p": exit status 1 (386.183ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-154936 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-154936 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2279492774/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.37s)
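The cleanup above tolerates an already-unmounted path: "sudo umount -f /mount-9p" returns "umount: /mount-9p: not mounted." with ssh exit status 32, and the test only logs it. A hedged sketch of treating that case as success during cleanup; the tolerance rule is an illustration, not the suite's own logic.

package main

import (
    "errors"
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-154936",
        "ssh", "sudo umount -f /mount-9p")
    out, err := cmd.CombinedOutput()
    var ee *exec.ExitError
    // Success, or a non-zero exit that only says the path was not mounted, both count as clean.
    if err == nil || (errors.As(err, &ee) && strings.Contains(string(out), "not mounted")) {
        fmt.Println("unmounted (or nothing was mounted to begin with)")
        return
    }
    fmt.Printf("umount failed: %v\n%s", err, out)
}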

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.15s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-154936
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

                                                
                                    
TestFunctional/delete_my-image_image (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-154936
--- PASS: TestFunctional/delete_my-image_image (0.06s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-154936
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                    
TestJSONOutput/start/Command (44.36s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-160123 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-160123 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (44.358862881s)
--- PASS: TestJSONOutput/start/Command (44.36s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-160123 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-160123 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (12.27s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-160123 --output=json --user=testUser
E1101 16:02:19.389953    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-160123 --output=json --user=testUser: (12.270951339s)
--- PASS: TestJSONOutput/stop/Command (12.27s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.74s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-160223 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-160223 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (337.751841ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e4147a75-986f-446a-9627-63a7e825b404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-160223] minikube v1.27.1 on Darwin 13.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1cf5cb85-0202-47b7-9b59-00227635f433","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15232"}}
	{"specversion":"1.0","id":"e72d482c-b1d2-40e8-9efe-86fbb4241077","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig"}}
	{"specversion":"1.0","id":"a881f37a-baeb-415d-b55c-6726899a3be3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"066e22da-d302-43a2-9d95-533840dfd381","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5b972c16-371f-4921-a407-5add104900d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube"}}
	{"specversion":"1.0","id":"aafe68d9-e635-4409-b771-096d7e9a2b34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-160223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-160223
--- PASS: TestErrorJSONOutput (0.74s)
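Each line emitted under --output=json above is a CloudEvents-style JSON object. A small sketch of decoding one of them in Go; the struct only mirrors the field names visible in this log and is illustrative, not minikube's own type.

package main

import (
    "encoding/json"
    "fmt"
)

// event mirrors the fields visible in the log lines above (illustrative only).
type event struct {
    Specversion string            `json:"specversion"`
    ID          string            `json:"id"`
    Source      string            `json:"source"`
    Type        string            `json:"type"`
    Data        map[string]string `json:"data"`
}

func main() {
    // The error event from the run above, trimmed to the fields used here.
    line := `{"specversion":"1.0","id":"aafe68d9-e635-4409-b771-096d7e9a2b34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS"}}`
    var e event
    if err := json.Unmarshal([]byte(line), &e); err != nil {
        panic(err)
    }
    fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"], e.Data["message"])
}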

                                                
                                    
TestKicCustomNetwork/create_custom_network (29.69s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-160224 --network=
E1101 16:02:44.220277    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-160224 --network=: (27.049743473s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-160224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-160224
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-160224: (2.58231322s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.69s)
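The check at kic_custom_network_test.go:122 lists Docker networks by name. A sketch of the same lookup from Go, assuming the custom network is named after the profile shown above; the scan is illustrative rather than the test's code.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Same listing the test runs: docker network ls --format {{.Name}}
    out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
    if err != nil {
        panic(err)
    }
    const profile = "docker-network-160224" // profile name from the log above
    for _, name := range strings.Fields(string(out)) {
        if name == profile {
            fmt.Println("found custom network:", name)
            return
        }
    }
    fmt.Println("no network named", profile)
}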

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (30.35s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-160253 --network=bridge
E1101 16:03:11.913179    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-160253 --network=bridge: (27.845789877s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-160253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-160253
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-160253: (2.443146801s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.35s)

                                                
                                    
TestKicExistingNetwork (29.84s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-160324 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-160324 --network=existing-network: (27.057935753s)
helpers_test.go:175: Cleaning up "existing-network-160324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-160324
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-160324: (2.424240353s)
--- PASS: TestKicExistingNetwork (29.84s)

                                                
                                    
TestKicCustomSubnet (30.16s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-160354 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-160354 --subnet=192.168.60.0/24: (27.449681357s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-160354 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-160354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-160354
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-160354: (2.62787698s)
--- PASS: TestKicCustomSubnet (30.16s)
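A sketch of reproducing the subnet assertion above by hand: read the network's IPAM config with the same docker network inspect template and compare it against the value passed to --subnet. The network name and expected CIDR are copied from the log; the comparison itself is illustrative.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    out, err := exec.Command("docker", "network", "inspect", "custom-subnet-160354",
        "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
    if err != nil {
        fmt.Println("inspect failed:", err)
        return
    }
    got := strings.TrimSpace(string(out))
    const want = "192.168.60.0/24" // the --subnet value from the start command above
    fmt.Printf("subnet %q, matches --subnet: %v\n", got, got == want)
}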

                                                
                                    
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (63.76s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-160424 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-160424 --driver=docker : (27.981944356s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-160424 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-160424 --driver=docker : (28.713049869s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-160424
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-160424
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-160424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-160424
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-160424: (2.60298092s)
helpers_test.go:175: Cleaning up "first-160424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-160424
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-160424: (2.617314888s)
--- PASS: TestMinikubeProfile (63.76s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.64s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-160528 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-160528 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.640794867s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.64s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-160528 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.56s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-160528 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-160528 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.553098032s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.56s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-160528 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.15s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-160528 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-160528 --alsologtostderr -v=5: (2.154773996s)
--- PASS: TestMountStart/serial/DeleteFirst (2.15s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-160528 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.57s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-160528
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-160528: (1.570284744s)
--- PASS: TestMountStart/serial/Stop (1.57s)

                                                
                                    
TestMountStart/serial/RestartStopped (5.17s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-160528
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-160528: (4.172194099s)
--- PASS: TestMountStart/serial/RestartStopped (5.17s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-160528 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (84.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-160556 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E1101 16:07:19.383256    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-160556 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m24.256263909s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (84.96s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-160556 -- rollout status deployment/busybox: (3.260402459s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- exec busybox-65db55d5d6-cfccc -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- exec busybox-65db55d5d6-lgnhn -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- exec busybox-65db55d5d6-cfccc -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- exec busybox-65db55d5d6-lgnhn -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- exec busybox-65db55d5d6-cfccc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- exec busybox-65db55d5d6-lgnhn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.02s)
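A sketch of the per-pod DNS checks above, driven from Go. It shells out to kubectl directly with a --context named after the profile, which is an assumption on my part; the suite itself goes through out/minikube-darwin-amd64 kubectl -p. Pod names and lookup targets are copied from the log.

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    pods := []string{"busybox-65db55d5d6-cfccc", "busybox-65db55d5d6-lgnhn"} // from the log above
    names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
    for _, pod := range pods {
        for _, n := range names {
            // Assumes a kubeconfig context named after the profile.
            cmd := exec.Command("kubectl", "--context", "multinode-160556",
                "exec", pod, "--", "nslookup", n)
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("%s: lookup of %s failed: %v\n%s", pod, n, err, out)
            } else {
                fmt.Printf("%s: resolved %s\n", pod, n)
            }
        }
    }
}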

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- exec busybox-65db55d5d6-cfccc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- exec busybox-65db55d5d6-cfccc -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- exec busybox-65db55d5d6-lgnhn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-160556 -- exec busybox-65db55d5d6-lgnhn -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                    
TestMultiNode/serial/AddNode (25.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-160556 -v 3 --alsologtostderr
E1101 16:07:44.213587    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-160556 -v 3 --alsologtostderr: (24.634802913s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-160556 status --alsologtostderr: (1.010787227s)
--- PASS: TestMultiNode/serial/AddNode (25.65s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

                                                
                                    
TestMultiNode/serial/CopyFile (14.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 cp testdata/cp-test.txt multinode-160556:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 cp multinode-160556:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile4229391827/001/cp-test_multinode-160556.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 cp multinode-160556:/home/docker/cp-test.txt multinode-160556-m02:/home/docker/cp-test_multinode-160556_multinode-160556-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556-m02 "sudo cat /home/docker/cp-test_multinode-160556_multinode-160556-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 cp multinode-160556:/home/docker/cp-test.txt multinode-160556-m03:/home/docker/cp-test_multinode-160556_multinode-160556-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556-m03 "sudo cat /home/docker/cp-test_multinode-160556_multinode-160556-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 cp testdata/cp-test.txt multinode-160556-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 cp multinode-160556-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile4229391827/001/cp-test_multinode-160556-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 cp multinode-160556-m02:/home/docker/cp-test.txt multinode-160556:/home/docker/cp-test_multinode-160556-m02_multinode-160556.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556 "sudo cat /home/docker/cp-test_multinode-160556-m02_multinode-160556.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 cp multinode-160556-m02:/home/docker/cp-test.txt multinode-160556-m03:/home/docker/cp-test_multinode-160556-m02_multinode-160556-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556-m03 "sudo cat /home/docker/cp-test_multinode-160556-m02_multinode-160556-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 cp testdata/cp-test.txt multinode-160556-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 cp multinode-160556-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile4229391827/001/cp-test_multinode-160556-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 cp multinode-160556-m03:/home/docker/cp-test.txt multinode-160556:/home/docker/cp-test_multinode-160556-m03_multinode-160556.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556 "sudo cat /home/docker/cp-test_multinode-160556-m03_multinode-160556.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 cp multinode-160556-m03:/home/docker/cp-test.txt multinode-160556-m02:/home/docker/cp-test_multinode-160556-m03_multinode-160556-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 ssh -n multinode-160556-m02 "sudo cat /home/docker/cp-test_multinode-160556-m03_multinode-160556-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.99s)
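A sketch of the cp-and-verify pattern above, reduced to one round trip: copy testdata/cp-test.txt into a node, cat it back over ssh, and compare. Binary, profile, and paths are taken from the log; the helper itself is illustrative, not the suite's code.

package main

import (
    "bytes"
    "fmt"
    "os"
    "os/exec"
)

func main() {
    const bin = "out/minikube-darwin-amd64"
    const profile = "multinode-160556"
    src, err := os.ReadFile("testdata/cp-test.txt")
    if err != nil {
        panic(err)
    }
    // Copy into the control-plane node, then read it back the way the test does.
    if err := exec.Command(bin, "-p", profile, "cp",
        "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt").Run(); err != nil {
        panic(err)
    }
    back, err := exec.Command(bin, "-p", profile, "ssh", "-n", profile,
        "sudo cat /home/docker/cp-test.txt").Output()
    if err != nil {
        panic(err)
    }
    fmt.Println("round-tripped contents match:", bytes.Equal(bytes.TrimSpace(src), bytes.TrimSpace(back)))
}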

                                                
                                    
TestMultiNode/serial/StopNode (13.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-160556 node stop m03: (12.313445988s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-160556 status: exit status 7 (755.5811ms)

                                                
                                                
-- stdout --
	multinode-160556
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-160556-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-160556-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-160556 status --alsologtostderr: exit status 7 (764.707171ms)

                                                
                                                
-- stdout --
	multinode-160556
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-160556-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-160556-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 16:08:21.456028    9143 out.go:296] Setting OutFile to fd 1 ...
	I1101 16:08:21.456297    9143 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 16:08:21.456302    9143 out.go:309] Setting ErrFile to fd 2...
	I1101 16:08:21.456306    9143 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 16:08:21.456446    9143 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	I1101 16:08:21.456659    9143 out.go:303] Setting JSON to false
	I1101 16:08:21.456683    9143 mustload.go:65] Loading cluster: multinode-160556
	I1101 16:08:21.456727    9143 notify.go:220] Checking for updates...
	I1101 16:08:21.457018    9143 config.go:180] Loaded profile config "multinode-160556": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 16:08:21.457031    9143 status.go:255] checking status of multinode-160556 ...
	I1101 16:08:21.457481    9143 cli_runner.go:164] Run: docker container inspect multinode-160556 --format={{.State.Status}}
	I1101 16:08:21.517304    9143 status.go:330] multinode-160556 host status = "Running" (err=<nil>)
	I1101 16:08:21.517334    9143 host.go:66] Checking if "multinode-160556" exists ...
	I1101 16:08:21.517611    9143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-160556
	I1101 16:08:21.577260    9143 host.go:66] Checking if "multinode-160556" exists ...
	I1101 16:08:21.577559    9143 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 16:08:21.577638    9143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-160556
	I1101 16:08:21.639063    9143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51027 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/multinode-160556/id_rsa Username:docker}
	I1101 16:08:21.723430    9143 ssh_runner.go:195] Run: systemctl --version
	I1101 16:08:21.727984    9143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 16:08:21.737244    9143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-160556
	I1101 16:08:21.798682    9143 kubeconfig.go:92] found "multinode-160556" server: "https://127.0.0.1:51026"
	I1101 16:08:21.798709    9143 api_server.go:165] Checking apiserver status ...
	I1101 16:08:21.798748    9143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 16:08:21.810125    9143 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1726/cgroup
	W1101 16:08:21.819804    9143 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1726/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 16:08:21.819866    9143 ssh_runner.go:195] Run: ls
	I1101 16:08:21.824209    9143 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51026/healthz ...
	I1101 16:08:21.830249    9143 api_server.go:278] https://127.0.0.1:51026/healthz returned 200:
	ok
	I1101 16:08:21.830274    9143 status.go:421] multinode-160556 apiserver status = Running (err=<nil>)
	I1101 16:08:21.830286    9143 status.go:257] multinode-160556 status: &{Name:multinode-160556 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 16:08:21.830305    9143 status.go:255] checking status of multinode-160556-m02 ...
	I1101 16:08:21.830603    9143 cli_runner.go:164] Run: docker container inspect multinode-160556-m02 --format={{.State.Status}}
	I1101 16:08:21.892388    9143 status.go:330] multinode-160556-m02 host status = "Running" (err=<nil>)
	I1101 16:08:21.892410    9143 host.go:66] Checking if "multinode-160556-m02" exists ...
	I1101 16:08:21.892671    9143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-160556-m02
	I1101 16:08:21.951701    9143 host.go:66] Checking if "multinode-160556-m02" exists ...
	I1101 16:08:21.951992    9143 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 16:08:21.952051    9143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-160556-m02
	I1101 16:08:22.012408    9143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51088 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/multinode-160556-m02/id_rsa Username:docker}
	I1101 16:08:22.095026    9143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 16:08:22.104071    9143 status.go:257] multinode-160556-m02 status: &{Name:multinode-160556-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 16:08:22.104099    9143 status.go:255] checking status of multinode-160556-m03 ...
	I1101 16:08:22.104376    9143 cli_runner.go:164] Run: docker container inspect multinode-160556-m03 --format={{.State.Status}}
	I1101 16:08:22.163730    9143 status.go:330] multinode-160556-m03 host status = "Stopped" (err=<nil>)
	I1101 16:08:22.163765    9143 status.go:343] host is not running, skipping remaining checks
	I1101 16:08:22.163771    9143 status.go:257] multinode-160556-m03 status: &{Name:multinode-160556-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (13.83s)
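In this run, minikube status exits 7 once a node host is stopped. A sketch of inspecting that exit code from Go; treating 7 as "at least one host stopped" is taken from the behaviour observed above, not from documentation.

package main

import (
    "errors"
    "fmt"
    "os/exec"
)

func main() {
    cmd := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-160556", "status")
    out, err := cmd.CombinedOutput()
    var ee *exec.ExitError
    switch {
    case err == nil:
        fmt.Println("all nodes report Running")
    case errors.As(err, &ee):
        // Exit code 7 in the run above corresponded to a stopped node host.
        fmt.Printf("status exit code %d\n%s", ee.ExitCode(), out)
    default:
        fmt.Println("could not run status:", err)
    }
}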

                                                
                                    
TestMultiNode/serial/StartAfterStop (22.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 node start m03 --alsologtostderr
E1101 16:08:42.431666    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-160556 node start m03 --alsologtostderr: (21.335725245s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (22.44s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (138.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-160556
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-160556
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-160556: (36.621693329s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-160556 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-160556 --wait=true -v=8 --alsologtostderr: (1m41.87281981s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-160556
--- PASS: TestMultiNode/serial/RestartKeepsNodes (138.61s)

                                                
                                    
TestMultiNode/serial/DeleteNode (16.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-160556 node delete m03: (16.018838866s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (16.89s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-160556 stop: (24.562078686s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-160556 status: exit status 7 (167.333563ms)

                                                
                                                
-- stdout --
	multinode-160556
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-160556-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-160556 status --alsologtostderr: exit status 7 (168.866417ms)

                                                
                                                
-- stdout --
	multinode-160556
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-160556-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 16:11:44.884610    9787 out.go:296] Setting OutFile to fd 1 ...
	I1101 16:11:44.884870    9787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 16:11:44.884876    9787 out.go:309] Setting ErrFile to fd 2...
	I1101 16:11:44.884879    9787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 16:11:44.884994    9787 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
	I1101 16:11:44.885201    9787 out.go:303] Setting JSON to false
	I1101 16:11:44.885226    9787 mustload.go:65] Loading cluster: multinode-160556
	I1101 16:11:44.885272    9787 notify.go:220] Checking for updates...
	I1101 16:11:44.885582    9787 config.go:180] Loaded profile config "multinode-160556": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1101 16:11:44.885593    9787 status.go:255] checking status of multinode-160556 ...
	I1101 16:11:44.885995    9787 cli_runner.go:164] Run: docker container inspect multinode-160556 --format={{.State.Status}}
	I1101 16:11:44.943204    9787 status.go:330] multinode-160556 host status = "Stopped" (err=<nil>)
	I1101 16:11:44.943220    9787 status.go:343] host is not running, skipping remaining checks
	I1101 16:11:44.943226    9787 status.go:257] multinode-160556 status: &{Name:multinode-160556 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 16:11:44.943248    9787 status.go:255] checking status of multinode-160556-m02 ...
	I1101 16:11:44.943511    9787 cli_runner.go:164] Run: docker container inspect multinode-160556-m02 --format={{.State.Status}}
	I1101 16:11:44.999103    9787 status.go:330] multinode-160556-m02 host status = "Stopped" (err=<nil>)
	I1101 16:11:44.999124    9787 status.go:343] host is not running, skipping remaining checks
	I1101 16:11:44.999132    9787 status.go:257] multinode-160556-m02 status: &{Name:multinode-160556-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.90s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (78.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-160556 --wait=true -v=8 --alsologtostderr --driver=docker 
E1101 16:12:19.378990    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 16:12:44.208071    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-160556 --wait=true -v=8 --alsologtostderr --driver=docker : (1m17.327937797s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-160556 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.25s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (32.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-160556
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-160556-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-160556-m02 --driver=docker : exit status 14 (415.332072ms)

                                                
                                                
-- stdout --
	* [multinode-160556-m02] minikube v1.27.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-160556-m02' is duplicated with machine name 'multinode-160556-m02' in profile 'multinode-160556'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-160556-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-160556-m03 --driver=docker : (29.28574069s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-160556
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-160556: exit status 80 (483.157571ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-160556
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-160556-m03 already exists in multinode-160556-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-160556-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-160556-m03: (2.641086451s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.89s)
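The profile-name validation exercised above can be reproduced by hand; a minimal sketch assuming an existing multi-node profile named multinode-160556 (commands and exit codes taken from the run above):

	# Reusing a machine name from an existing profile is rejected (MK_USAGE, exit 14)
	out/minikube-darwin-amd64 start -p multinode-160556-m02 --driver=docker
	# A standalone profile whose name collides with the next auto-assigned node name
	# makes "node add" fail (GUEST_NODE_ADD, exit 80)
	out/minikube-darwin-amd64 start -p multinode-160556-m03 --driver=docker
	out/minikube-darwin-amd64 node add -p multinode-160556
	out/minikube-darwin-amd64 delete -p multinode-160556-m03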

                                                
                                    
TestPreload (138.86s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-161341 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E1101 16:14:07.260072    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-161341 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (55.811139576s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-161341 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-161341 -- docker pull gcr.io/k8s-minikube/busybox: (2.590938337s)
preload_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-161341 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.24.6
preload_test.go:67: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-161341 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.24.6: (1m17.229244125s)
preload_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-161341 -- docker images
helpers_test.go:175: Cleaning up "test-preload-161341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-161341
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-161341: (2.816024095s)
--- PASS: TestPreload (138.86s)

                                                
                                    
TestScheduledStopUnix (101.76s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-161559 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-161559 --memory=2048 --driver=docker : (27.755543106s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-161559 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-161559 -n scheduled-stop-161559
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-161559 -n scheduled-stop-161559: exit status 85 (192.3207ms)

                                                
                                                
-- stdout --
	* Profile "scheduled-stop-161559" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p scheduled-stop-161559"

                                                
                                                
-- /stdout --
scheduled_stop_test.go:191: status error: exit status 85 (may be ok)
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-161559 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-161559 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-161559 -n scheduled-stop-161559
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-161559
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-161559 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1101 16:17:19.490401    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-161559
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-161559: exit status 7 (115.761331ms)

                                                
                                                
-- stdout --
	scheduled-stop-161559
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-161559 -n scheduled-stop-161559
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-161559 -n scheduled-stop-161559: exit status 7 (111.373002ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-161559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-161559
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-161559: (2.298464509s)
--- PASS: TestScheduledStopUnix (101.76s)
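Condensed, the scheduled-stop flow this test walks through looks like the sketch below (profile name and timings as used above; illustrative only):

	out/minikube-darwin-amd64 stop -p scheduled-stop-161559 --schedule 5m        # queue a stop 5 minutes out
	out/minikube-darwin-amd64 stop -p scheduled-stop-161559 --cancel-scheduled   # cancel the pending stop
	out/minikube-darwin-amd64 stop -p scheduled-stop-161559 --schedule 15s       # re-schedule; once it fires,
	out/minikube-darwin-amd64 status -p scheduled-stop-161559                    # status exits 7 and reports Stopped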

                                                
                                    
TestSkaffold (63.23s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe974790143 version
skaffold_test.go:63: skaffold version: v2.0.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-161741 --memory=2600 --driver=docker 
E1101 16:17:44.320702    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-161741 --memory=2600 --driver=docker : (28.261649643s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe974790143 run --minikube-profile skaffold-161741 --kube-context skaffold-161741 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe974790143 run --minikube-profile skaffold-161741 --kube-context skaffold-161741 --status-check=true --port-forward=false --interactive=false: (20.739540134s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-cfb5b5fff-xv8cj" [0ea570ad-e59f-4a50-adf6-a9b7d024cfe7] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.016329128s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-6fc9d74586-fwnpp" [a4061076-7af5-4def-81ba-88fc28e9e530] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.008862109s
helpers_test.go:175: Cleaning up "skaffold-161741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-161741
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-161741: (2.948906244s)
--- PASS: TestSkaffold (63.23s)

                                                
                                    
TestInsufficientStorage (13.6s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-161845 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-161845 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (9.780119069s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ba3e93f4-bfd2-4001-b42e-fcca2bf2c6f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-161845] minikube v1.27.1 on Darwin 13.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7003d5b2-6860-4249-9426-8bd6893ede6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15232"}}
	{"specversion":"1.0","id":"a8d4693f-37c3-4cdd-a7d0-ab21368ce23b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig"}}
	{"specversion":"1.0","id":"7a7f14fa-753d-4c57-9c77-cb8a89c89f19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"c4866322-d8b2-49d8-ac21-e254a85deb1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3a31ec49-6bc4-4469-99e4-912d6680da63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube"}}
	{"specversion":"1.0","id":"ea42b752-dcfc-4d59-8416-1512003392c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fa172302-3e92-4cc9-bf20-5cb71b5932b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b20b00c1-c798-49f9-ae0a-dede8d4edb3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d5289a70-d989-431b-8e0c-3893e8873831","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"b76c082f-2569-4d0d-a60d-8fea1550c61d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-161845 in cluster insufficient-storage-161845","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8090cfff-cb2a-4b3d-8aa7-91972bdde3e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"75fc9593-e7b3-4a0c-b983-ddba60c77958","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"655b25e7-1e15-457e-844b-64277647f731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-161845 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-161845 --output=json --layout=cluster: exit status 7 (397.907647ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-161845","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-161845","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 16:18:55.172127   11328 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-161845" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-161845 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-161845 --output=json --layout=cluster: exit status 7 (393.52871ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-161845","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-161845","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 16:18:55.566097   11338 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-161845" does not appear in /Users/jenkins/minikube-integration/15232-2108/kubeconfig
	E1101 16:18:55.574792   11338 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/insufficient-storage-161845/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-161845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-161845
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-161845: (3.02700886s)
--- PASS: TestInsufficientStorage (13.60s)
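With --output=json the start command emits one CloudEvents-style JSON object per line, as shown above. A hedged sketch for pulling the error event out of such a stream with jq (the jq filter is illustrative and not part of the test):

	out/minikube-darwin-amd64 start -p insufficient-storage-161845 --memory=2048 --output=json --wait=true --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# e.g. RSRC_DOCKER_STORAGE: Docker is out of disk space! (/var is at 100%% of capacity). ...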

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-162013
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-162013: (3.585409658s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.59s)

                                                
                                    
TestPause/serial/Start (52.5s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-162153 --memory=2048 --install-addons=false --wait=all --driver=docker 
E1101 16:22:19.488195    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 16:22:44.318883    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-162153 --memory=2048 --install-addons=false --wait=all --driver=docker : (52.496305123s)
--- PASS: TestPause/serial/Start (52.50s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (53.51s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-162153 --alsologtostderr -v=1 --driver=docker 
E1101 16:23:32.027011    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 16:23:32.033349    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 16:23:32.045476    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 16:23:32.067417    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 16:23:32.107982    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 16:23:32.188816    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 16:23:32.349357    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 16:23:32.671555    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 16:23:33.311889    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 16:23:34.592049    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 16:23:37.153699    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-162153 --alsologtostderr -v=1 --driver=docker : (53.490480655s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (53.51s)

                                                
                                    
TestPause/serial/Pause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-162153 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

                                                
                                    
TestPause/serial/VerifyStatus (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-162153 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-162153 --output=json --layout=cluster: exit status 2 (442.830615ms)

                                                
                                                
-- stdout --
	{"Name":"pause-162153","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-162153","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)
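In the cluster-layout JSON above, paused components are reported with StatusCode 418 ("Paused"). A hedged jq sketch for summarising per-node component state from that output (filter illustrative):

	out/minikube-darwin-amd64 status -p pause-162153 --output=json --layout=cluster \
	  | jq -r '.Nodes[] | .Name + ": apiserver=" + .Components.apiserver.StatusName + " kubelet=" + .Components.kubelet.StatusName'
	# e.g. pause-162153: apiserver=Paused kubelet=Stopped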

                                                
                                    
TestPause/serial/Unpause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-162153 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

                                                
                                    
TestPause/serial/PauseAgain (0.8s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-162153 --alsologtostderr -v=5
E1101 16:23:42.274366    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
--- PASS: TestPause/serial/PauseAgain (0.80s)

                                                
                                    
TestPause/serial/DeletePaused (2.69s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-162153 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-162153 --alsologtostderr -v=5: (2.686313917s)
--- PASS: TestPause/serial/DeletePaused (2.69s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.58s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-162153
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-162153: exit status 1 (56.04376ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-162153

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.58s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-162346 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-162346 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (355.232827ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-162346] minikube v1.27.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15232
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.36s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (29.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-162346 --driver=docker 
E1101 16:23:52.515875    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-162346 --driver=docker : (29.070173984s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-162346 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.57s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (9.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-162346 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-162346 --no-kubernetes --driver=docker : (6.283918271s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-162346 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-162346 status -o json: exit status 2 (419.789548ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-162346","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-162346
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-162346: (2.632625848s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.34s)

                                                
                                    
TestNoKubernetes/serial/Start (6.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-162346 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-162346 --no-kubernetes --driver=docker : (6.763123641s)
--- PASS: TestNoKubernetes/serial/Start (6.76s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-162346 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-162346 "sudo systemctl is-active --quiet service kubelet": exit status 1 (418.850996ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (36.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (16.433440381s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
E1101 16:24:53.958221    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (19.597676132s)
--- PASS: TestNoKubernetes/serial/ProfileList (36.03s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-162346
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-162346: (1.600567741s)
--- PASS: TestNoKubernetes/serial/Stop (1.60s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (4.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-162346 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-162346 --driver=docker : (4.176342408s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.18s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-162346 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-162346 "sudo systemctl is-active --quiet service kubelet": exit status 1 (389.666292ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.03s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.27.1 on darwin
- MINIKUBE_LOCATION=15232
- KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2602921803/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2602921803/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2602921803/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2602921803/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.03s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (9.01s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.27.1 on darwin
- MINIKUBE_LOCATION=15232
- KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1263393360/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1263393360/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1263393360/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1263393360/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (9.01s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (53.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-161858 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-161858 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (53.693309289s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.69s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (63.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-161859 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
E1101 16:30:47.369901    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-161859 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (1m3.909110353s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.91s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-161858 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-161858 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-r7529" [7a6825dd-51c4-462a-96f8-add45d0ed33f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-r7529" [7a6825dd-51c4-462a-96f8-add45d0ed33f] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.006828707s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.20s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-161858 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (5.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.127127026s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.13s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (75.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-161859 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-161859 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m15.729903183s)
--- PASS: TestNetworkPlugins/group/cilium/Start (75.73s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-2slmh" [79977059-b99b-489d-b0d7-5a60d8aae9b1] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.01747308s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-161859 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-161859 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-txlps" [5fe6cca9-7d21-410f-8903-90105e2abc75] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-txlps" [5fe6cca9-7d21-410f-8903-90105e2abc75] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.011742053s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-161859 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-161859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-161859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (325.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-161859 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
E1101 16:32:19.484368    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 16:32:44.314566    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-161859 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (5m25.490187998s)
--- PASS: TestNetworkPlugins/group/calico/Start (325.49s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-522tm" [d83a3f18-6019-4128-a1ed-77662d18026e] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.014707152s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-161859 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (14.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-161859 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-pxgmk" [9fbff263-c849-46be-8b41-88619c18518b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-pxgmk" [9fbff263-c849-46be-8b41-88619c18518b] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 14.008499534s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (14.76s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-161859 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-161859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-161859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/Start (82.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-161859 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
E1101 16:33:32.020644    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-161859 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (1m22.525875847s)
--- PASS: TestNetworkPlugins/group/false/Start (82.53s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-161859 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-161859 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-snc6d" [27639513-ec4b-45db-b80d-e6a74733ed68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-snc6d" [27639513-ec4b-45db-b80d-e6a74733ed68] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.050185522s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.33s)
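
The "waiting 15m0s for pods matching ..." lines here and throughout this report come from the harness polling the cluster until a pod carrying the given label is Running. A minimal client-go sketch of that polling pattern, for orientation only (hypothetical helper, not the suite's actual code; the kubeconfig path and the app=netcat selector are illustrative assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until at least one pod matching the selector is Running,
// or the context expires. Illustrative sketch, not the test suite's helper.
func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	// Assumes kubectl's default kubeconfig; point it at the profile's context as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()
	if err := waitForLabel(ctx, cs, "default", "app=netcat"); err != nil {
		panic(err)
	}
	fmt.Println("app=netcat is running")
}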

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-161859 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-161859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (5.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-161859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-161859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.113469876s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.11s)
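
Note the PASS above despite the non-zero nc exit: with --cni=false there is no CNI plugin to provide hairpin NAT, so the suite appears to treat a failed hairpin connection as the expected result for this profile. A small Go sketch of such a per-plugin expectation check, using hypothetical names and a plugin table inferred from this report (not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
)

// hairpinExpected records, per network plugin, whether a pod is expected to
// reach itself through its own Service (hairpin traffic). The table is an
// illustrative assumption, not minikube's real configuration.
func hairpinExpected(plugin string) bool {
	switch plugin {
	case "false": // CNI disabled
		return false
	default:
		return true
	}
}

func main() {
	plugin := "false"
	// The same probe the test runs; context and deployment names are taken from the log above.
	cmd := exec.Command("kubectl", "--context", "false-161859", "exec",
		"deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
	err := cmd.Run()

	switch {
	case hairpinExpected(plugin) && err != nil:
		fmt.Println("FAIL: hairpin connection should have succeeded:", err)
	case !hairpinExpected(plugin) && err == nil:
		fmt.Println("FAIL: hairpin connection unexpectedly succeeded")
	default:
		fmt.Println("PASS: hairpin behaviour matches the expectation for", plugin)
	}
}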

                                                
                                    
TestNetworkPlugins/group/bridge/Start (46.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-161858 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-161858 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (46.473918665s)
--- PASS: TestNetworkPlugins/group/bridge/Start (46.47s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-161858 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-161858 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-gsmh8" [3ef6227f-f4d1-4118-987a-34fe58a35293] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-gsmh8" [3ef6227f-f4d1-4118-987a-34fe58a35293] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.037695478s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-161858 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (45.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-161858 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E1101 16:36:18.480232    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:36:18.486547    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:36:18.496908    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:36:18.517562    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:36:18.558001    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:36:18.638326    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:36:18.798842    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:36:19.119044    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:36:19.759233    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:36:21.039436    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:36:23.599550    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:36:28.719691    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:36:38.959711    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:36:48.316718    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:36:48.322189    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:36:48.333522    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:36:48.353699    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:36:48.395817    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:36:48.475954    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:36:48.636478    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:36:48.956574    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-161858 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (45.069231292s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (45.07s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-161858 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-161858 replace --force -f testdata/netcat-deployment.yaml
E1101 16:36:49.596995    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-qgxvf" [a09751a4-9120-4318-8873-3a791c5ec83e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 16:36:50.877187    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:36:53.437659    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-qgxvf" [a09751a4-9120-4318-8873-3a791c5ec83e] Running
E1101 16:36:58.558268    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:36:59.441124    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.009867505s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-161858 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (44.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-161858 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 
E1101 16:37:08.798512    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:37:19.481885    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 16:37:29.278536    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-161858 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (44.349342185s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (44.35s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-vtp5h" [4427e572-8f9c-460b-951f-b2936922327e] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1101 16:37:40.401107    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.017921452s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-161859 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-161859 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-6b29v" [d6050835-333f-44b5-9b99-26c1404d7e9f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 16:37:44.311786    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-6b29v" [d6050835-333f-44b5-9b99-26c1404d7e9f] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.007870998s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-161858 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-161858 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-shcw4" [f972116f-0c69-42ae-b0df-840274112bd3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-shcw4" [f972116f-0c69-42ae-b0df-840274112bd3] Running
E1101 16:37:56.920193    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.0077272s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.21s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-161859 exec deployment/netcat -- nslookup kubernetes.default
E1101 16:37:54.359703    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
E1101 16:37:54.364806    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
E1101 16:37:54.374943    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
E1101 16:37:54.395181    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
E1101 16:37:54.435394    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
E1101 16:37:54.516282    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-161859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1101 16:37:54.676451    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-161859 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-161858 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-161858 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (52.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-163909 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3
E1101 16:39:16.284208    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
E1101 16:39:32.159314    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:39:41.593853    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:39:41.598972    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:39:41.609053    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:39:41.629239    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:39:41.669330    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:39:41.749425    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:39:41.910079    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:39:42.230304    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:39:42.872101    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:39:44.152210    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:39:46.714455    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:39:51.844229    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:39:55.087857    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 16:40:02.095727    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-163909 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3: (52.982635936s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-163909 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [18b60569-3b94-4b1c-a9b4-bcbbd5bacd09] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [18b60569-3b94-4b1c-a9b4-bcbbd5bacd09] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.013300709s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-163909 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-163909 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-163909 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.50s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-163909 --alsologtostderr -v=3
E1101 16:40:22.584369    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-163909 --alsologtostderr -v=3: (12.495268355s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-163909 -n no-preload-163909
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-163909 -n no-preload-163909: exit status 7 (113.446028ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-163909 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.36s)
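
The status probes above rely on minikube's --format flag, which takes a Go text/template rendered against the profile's status, so {{.Host}} prints only the host state (here "Stopped"). A self-contained illustration of that templating mechanism; the Status struct is a hypothetical stand-in reduced to the fields this report queries:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the structure minikube renders with --format;
// only the fields referenced in this report are sketched here.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// --format={{.Host}} is evaluated as a Go text/template over the status value.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	s := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}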

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (300.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-163909 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3
E1101 16:40:38.232649    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
E1101 16:40:48.874050    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:40:48.880438    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:40:48.891618    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:40:48.912182    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:40:48.952914    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:40:49.034064    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:40:49.195356    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:40:49.516655    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:40:50.156797    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:40:51.439192    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:40:53.999574    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:40:59.119901    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:41:03.546921    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:41:09.361413    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:41:18.507334    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:41:29.841655    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:41:46.191038    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:41:48.345877    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:41:49.739044    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:41:49.744100    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:41:49.754905    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:41:49.776965    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:41:49.819130    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:41:49.899311    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:41:50.059583    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:41:50.381757    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:41:51.023093    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:41:52.304367    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:41:54.865686    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:41:59.986438    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:42:02.560134    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-163909 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3: (5m0.325821808s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-163909 -n no-preload-163909
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.78s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.60s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-163757 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-163757 --alsologtostderr -v=3: (1.598054893s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.60s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-163757 -n old-k8s-version-163757: exit status 7 (112.793708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-163757 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (21.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-p5kq6" [2cbf8dfc-5581-4ade-8d58-63d403d93d88] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1101 16:45:33.680021    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-p5kq6" [2cbf8dfc-5581-4ade-8d58-63d403d93d88] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.01548102s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (21.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-p5kq6" [2cbf8dfc-5581-4ade-8d58-63d403d93d88] Running
E1101 16:45:48.873350    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006845721s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-163909 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-163909 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.43s)
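
The image check above shells into the node and parses crictl's JSON listing. A short Go sketch of consuming that output; the JSON field names follow typical crictl output and should be read as an assumption, and the program only lists tags rather than reproducing the test's actual allow-list:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// imageList mirrors the approximate shape of `crictl images -o json` output.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Hypothetical usage:
	//   minikube ssh -p no-preload-163909 "sudo crictl images -o json" | go run listimages.go
	var out imageList
	if err := json.NewDecoder(os.Stdin).Decode(&out); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}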

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-163909 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-163909 -n no-preload-163909
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-163909 -n no-preload-163909: exit status 2 (466.011434ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-163909 -n no-preload-163909
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-163909 -n no-preload-163909: exit status 2 (480.938059ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-163909 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-163909 -n no-preload-163909
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-163909 -n no-preload-163909
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (52.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-164600 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3
E1101 16:46:16.560459    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:46:18.504895    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:46:48.340928    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kindnet-161859/client.crt: no such file or directory
E1101 16:46:49.736429    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-164600 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3: (52.675295351s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-164600 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [b6a5436d-ba93-4a36-b9de-e3c0735f3abc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [b6a5436d-ba93-4a36-b9de-e3c0735f3abc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.01699195s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-164600 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.90s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-164600 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-164600 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-164600 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-164600 --alsologtostderr -v=3: (12.522547929s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-164600 -n embed-certs-164600
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-164600 -n embed-certs-164600: exit status 7 (113.851889ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-164600 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (304.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-164600 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3
E1101 16:47:17.427647    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/enable-default-cni-161858/client.crt: no such file or directory
E1101 16:47:19.506067    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 16:47:27.391688    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 16:47:35.709623    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:47:44.336318    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 16:47:49.831055    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:47:54.383738    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory
E1101 16:48:03.396207    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/calico-161859/client.crt: no such file or directory
E1101 16:48:17.519101    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
E1101 16:48:32.044113    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/skaffold-161741/client.crt: no such file or directory
E1101 16:49:41.617486    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/false-161859/client.crt: no such file or directory
E1101 16:50:02.304320    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
E1101 16:50:02.310673    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
E1101 16:50:02.320749    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
E1101 16:50:02.341544    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
E1101 16:50:02.382100    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
E1101 16:50:02.463358    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
E1101 16:50:02.624097    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
E1101 16:50:02.944675    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
E1101 16:50:03.585537    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
E1101 16:50:04.865720    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
E1101 16:50:07.428025    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
E1101 16:50:12.548256    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
E1101 16:50:22.788372    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
E1101 16:50:43.290338    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory
E1101 16:50:48.868957    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/bridge-161858/client.crt: no such file or directory
E1101 16:51:18.501711    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
E1101 16:51:24.251812    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/no-preload-163909/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-164600 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3: (5m3.585632771s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-164600 -n embed-certs-164600
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (304.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-fh64j" [0af527ec-085d-43a5-9b84-093f0dbdab0f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-fh64j" [0af527ec-085d-43a5-9b84-093f0dbdab0f] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.016770785s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-fh64j" [0af527ec-085d-43a5-9b84-093f0dbdab0f] Running
E1101 16:52:41.544760    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/auto-161858/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009725009s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-164600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-164600 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-164600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-164600 -n embed-certs-164600
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-164600 -n embed-certs-164600: exit status 2 (415.156212ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-164600 -n embed-certs-164600

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-164600 -n embed-certs-164600: exit status 2 (423.831309ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-164600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-164600 -n embed-certs-164600
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-164600 -n embed-certs-164600
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-165249 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3
E1101 16:52:54.382157    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/cilium-161859/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-165249 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3: (45.353757629s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-165249 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [3f5380e3-b2bf-4fbf-967a-0fcfd7c3b7aa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
helpers_test.go:342: "busybox" [3f5380e3-b2bf-4fbf-967a-0fcfd7c3b7aa] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.016528743s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-165249 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-165249 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-165249 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-165249 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-165249 --alsologtostderr -v=3: (12.372542999s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-165249 -n default-k8s-diff-port-165249
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-165249 -n default-k8s-diff-port-165249: exit status 7 (162.707579ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-165249 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-165249 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-165249 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3: (4m58.424547873s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-165249 -n default-k8s-diff-port-165249
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-24l22" [25363986-8738-4181-aa6b-a19eb8f85464] Pending
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-24l22" [25363986-8738-4181-aa6b-a19eb8f85464] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-24l22" [25363986-8738-4181-aa6b-a19eb8f85464] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.01211239s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-24l22" [25363986-8738-4181-aa6b-a19eb8f85464] Running
E1101 16:59:12.908326    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00825371s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-165249 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-165249 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-165249 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-165249 -n default-k8s-diff-port-165249
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-165249 -n default-k8s-diff-port-165249: exit status 2 (425.751321ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-165249 -n default-k8s-diff-port-165249
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-165249 -n default-k8s-diff-port-165249: exit status 2 (430.195294ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-165249 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-165249 -n default-k8s-diff-port-165249
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-165249 -n default-k8s-diff-port-165249
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (43.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-165923 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-165923 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3: (43.300096886s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-165923 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (12.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-165923 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-165923 --alsologtostderr -v=3: (12.513801014s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.51s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-165923 -n newest-cni-165923
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-165923 -n newest-cni-165923: exit status 7 (114.989446ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-165923 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (18.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-165923 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-165923 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3: (17.800251624s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-165923 -n newest-cni-165923
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-165923 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-165923 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-165923 -n newest-cni-165923
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-165923 -n newest-cni-165923: exit status 2 (425.470599ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-165923 -n newest-cni-165923
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-165923 -n newest-cni-165923: exit status 2 (426.319978ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-165923 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-165923 -n newest-cni-165923
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-165923 -n newest-cni-165923
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.30s)

                                                
                                    

Test skip (18/295)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (15.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:283: registry stabilized in 14.942127ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-g2m7d" [86a624fa-67b8-4e34-aef2-47dd653c4507] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011253342s
addons_test.go:288: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-t6c4k" [cbc4a754-c54b-4729-a3ff-1f831e68c9bf] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:288: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007088941s
addons_test.go:293: (dbg) Run:  kubectl --context addons-154449 delete po -l run=registry-test --now
addons_test.go:298: (dbg) Run:  kubectl --context addons-154449 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:298: (dbg) Done: kubectl --context addons-154449 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.739283578s)
addons_test.go:308: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (15.87s)

                                                
                                    
TestAddons/parallel/Ingress (10.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:165: (dbg) Run:  kubectl --context addons-154449 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Run:  kubectl --context addons-154449 replace --force -f testdata/nginx-ingress-v1.yaml

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:198: (dbg) Run:  kubectl --context addons-154449 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [aae0b412-5f32-42c4-94d7-0e841307e9ed] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [aae0b412-5f32-42c4-94d7-0e841307e9ed] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.010854033s
addons_test.go:215: (dbg) Run:  out/minikube-darwin-amd64 -p addons-154449 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:235: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.89s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:451: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-154936 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-154936 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-5d22q" [839acb4f-462d-446e-8501-e8a98374b4ba] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-connect-6458c8fb6f-5d22q" [839acb4f-462d-446e-8501-e8a98374b4ba] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.007342203s
functional_test.go:1576: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.12s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/flannel (0.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-161858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-161858
--- SKIP: TestNetworkPlugins/group/flannel (0.66s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel (0.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-161859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-161859
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.59s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-165249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-165249
E1101 16:52:49.829316    3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/kubenet-161858/client.crt: no such file or directory
--- SKIP: TestStartStop/group/disable-driver-mounts (0.41s)

                                                
                                    