Test Report: Docker_macOS 14269

ab7bb61b313d0ba57acd833ecb833795c1bc5389:2022-06-02:24239

Failed tests (22/282)

TestDownloadOnly/v1.16.0/preload-exists (0.11s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:107: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.11s)
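For context, the assertion at aaa_download_only_test.go:107 amounts to a stat of the expected preload tarball path under MINIKUBE_HOME, which failed because the file was never downloaded. Below is a minimal, self-contained sketch of that kind of existence check; the path layout helper and its name are assumptions for illustration only, not minikube's actual implementation.

// Hypothetical sketch: verify that the preload tarball for a given Kubernetes
// version exists in the local cache, mirroring the failing check above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadTarballPath builds the expected cache location of the preload tarball
// (assumed layout, matching the path seen in the failure message).
func preloadTarballPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	home := os.Getenv("MINIKUBE_HOME") // e.g. the .minikube directory used by this CI run
	p := preloadTarballPath(home, "v1.16.0")
	if _, err := os.Stat(p); err != nil {
		// This is the condition the test reports: the tarball is missing on disk.
		fmt.Printf("failed to verify preloaded tarball file exists: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("preload tarball present:", p)
}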

TestIngressAddonLegacy/StartLegacyK8sCluster (254.45s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220602101918-2113 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0602 10:19:22.829795    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:19:33.070690    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:19:53.553063    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:20:34.515552    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:21:56.436497    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:23:01.280407    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:23:01.286432    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:23:01.298161    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:23:01.320380    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:23:01.362671    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:23:01.444892    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:23:01.607075    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:23:01.929315    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:23:02.569527    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:23:03.851793    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:23:06.414064    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:23:11.534251    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:23:21.776587    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220602101918-2113 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m14.414446492s)

-- stdout --
	* [ingress-addon-legacy-20220602101918-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node ingress-addon-legacy-20220602101918-2113 in cluster ingress-addon-legacy-20220602101918-2113
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0602 10:19:18.798683    4182 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:19:18.798898    4182 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:19:18.798902    4182 out.go:309] Setting ErrFile to fd 2...
	I0602 10:19:18.798906    4182 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:19:18.799009    4182 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 10:19:18.799334    4182 out.go:303] Setting JSON to false
	I0602 10:19:18.814290    4182 start.go:115] hostinfo: {"hostname":"37309.local","uptime":1128,"bootTime":1654189230,"procs":351,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 10:19:18.814396    4182 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 10:19:18.836521    4182 out.go:177] * [ingress-addon-legacy-20220602101918-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 10:19:18.879469    4182 notify.go:193] Checking for updates...
	I0602 10:19:18.901233    4182 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 10:19:18.923255    4182 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 10:19:18.945317    4182 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 10:19:18.967336    4182 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 10:19:18.989204    4182 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 10:19:19.010547    4182 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 10:19:19.083620    4182 docker.go:137] docker version: linux-20.10.14
	I0602 10:19:19.083744    4182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:19:19.209205    4182 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-02 17:19:19.155642728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:19:19.252596    4182 out.go:177] * Using the docker driver based on user configuration
	I0602 10:19:19.273933    4182 start.go:284] selected driver: docker
	I0602 10:19:19.273960    4182 start.go:806] validating driver "docker" against <nil>
	I0602 10:19:19.273986    4182 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 10:19:19.277397    4182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:19:19.402851    4182 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-02 17:19:19.349431919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:19:19.402991    4182 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 10:19:19.403134    4182 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 10:19:19.425096    4182 out.go:177] * Using Docker Desktop driver with the root privilege
	I0602 10:19:19.446671    4182 cni.go:95] Creating CNI manager for ""
	I0602 10:19:19.446702    4182 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:19:19.446714    4182 start_flags.go:306] config:
	{Name:ingress-addon-legacy-20220602101918-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220602101918-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerI
Ps:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:19:19.468777    4182 out.go:177] * Starting control plane node ingress-addon-legacy-20220602101918-2113 in cluster ingress-addon-legacy-20220602101918-2113
	I0602 10:19:19.512713    4182 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 10:19:19.534761    4182 out.go:177] * Pulling base image ...
	I0602 10:19:19.577667    4182 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0602 10:19:19.577660    4182 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 10:19:19.645162    4182 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 10:19:19.645199    4182 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 10:19:19.649732    4182 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0602 10:19:19.649752    4182 cache.go:57] Caching tarball of preloaded images
	I0602 10:19:19.650103    4182 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0602 10:19:19.694197    4182 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0602 10:19:19.715557    4182 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0602 10:19:19.815409    4182 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0602 10:19:24.315792    4182 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0602 10:19:24.316017    4182 preload.go:256] verifying checksumm of /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0602 10:19:24.956619    4182 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0602 10:19:24.956895    4182 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/config.json ...
	I0602 10:19:24.956916    4182 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/config.json: {Name:mk7daf2948a22633bcd97b6a8f584e46330af074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:19:24.957229    4182 cache.go:206] Successfully downloaded all kic artifacts
	I0602 10:19:24.957256    4182 start.go:352] acquiring machines lock for ingress-addon-legacy-20220602101918-2113: {Name:mk78015348215c00138775ca84c5e751799263fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:19:24.957420    4182 start.go:356] acquired machines lock for "ingress-addon-legacy-20220602101918-2113" in 156.363µs
	I0602 10:19:24.957440    4182 start.go:91] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220602101918-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-202206021
01918-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 10:19:24.957545    4182 start.go:131] createHost starting for "" (driver="docker")
	I0602 10:19:24.979934    4182 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0602 10:19:24.980236    4182 start.go:165] libmachine.API.Create for "ingress-addon-legacy-20220602101918-2113" (driver="docker")
	I0602 10:19:24.980276    4182 client.go:168] LocalClient.Create starting
	I0602 10:19:24.980409    4182 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem
	I0602 10:19:24.980475    4182 main.go:134] libmachine: Decoding PEM data...
	I0602 10:19:24.980501    4182 main.go:134] libmachine: Parsing certificate...
	I0602 10:19:24.980594    4182 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem
	I0602 10:19:24.980647    4182 main.go:134] libmachine: Decoding PEM data...
	I0602 10:19:24.980695    4182 main.go:134] libmachine: Parsing certificate...
	I0602 10:19:24.981543    4182 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220602101918-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 10:19:25.048581    4182 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220602101918-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 10:19:25.048690    4182 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220602101918-2113] to gather additional debugging logs...
	I0602 10:19:25.048717    4182 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220602101918-2113
	W0602 10:19:25.112150    4182 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220602101918-2113 returned with exit code 1
	I0602 10:19:25.112177    4182 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220602101918-2113]: docker network inspect ingress-addon-legacy-20220602101918-2113: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220602101918-2113
	I0602 10:19:25.112208    4182 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220602101918-2113]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220602101918-2113
	
	** /stderr **
	I0602 10:19:25.112308    4182 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 10:19:25.187141    4182 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000658198] misses:0}
	I0602 10:19:25.187184    4182 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 10:19:25.187201    4182 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220602101918-2113 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 10:19:25.187272    4182 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220602101918-2113
	I0602 10:19:25.308157    4182 network_create.go:99] docker network ingress-addon-legacy-20220602101918-2113 192.168.49.0/24 created
	I0602 10:19:25.308201    4182 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20220602101918-2113" container
	I0602 10:19:25.308322    4182 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 10:19:25.370579    4182 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220602101918-2113 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220602101918-2113 --label created_by.minikube.sigs.k8s.io=true
	I0602 10:19:25.432525    4182 oci.go:103] Successfully created a docker volume ingress-addon-legacy-20220602101918-2113
	I0602 10:19:25.432667    4182 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-20220602101918-2113-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220602101918-2113 --entrypoint /usr/bin/test -v ingress-addon-legacy-20220602101918-2113:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 10:19:25.900388    4182 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-20220602101918-2113
	I0602 10:19:25.900428    4182 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0602 10:19:25.900441    4182 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 10:19:25.900533    4182 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220602101918-2113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 10:19:29.884299    4182 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220602101918-2113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (3.983633037s)
	I0602 10:19:29.884321    4182 kic.go:188] duration metric: took 3.983863 seconds to extract preloaded images to volume
	I0602 10:19:29.884418    4182 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 10:19:30.008966    4182 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-20220602101918-2113 --name ingress-addon-legacy-20220602101918-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220602101918-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-20220602101918-2113 --network ingress-addon-legacy-20220602101918-2113 --ip 192.168.49.2 --volume ingress-addon-legacy-20220602101918-2113:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	I0602 10:19:30.368362    4182 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220602101918-2113 --format={{.State.Running}}
	I0602 10:19:30.436473    4182 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220602101918-2113 --format={{.State.Status}}
	I0602 10:19:30.506456    4182 cli_runner.go:164] Run: docker exec ingress-addon-legacy-20220602101918-2113 stat /var/lib/dpkg/alternatives/iptables
	I0602 10:19:30.625726    4182 oci.go:247] the created container "ingress-addon-legacy-20220602101918-2113" has a running status.
	I0602 10:19:30.625753    4182 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/ingress-addon-legacy-20220602101918-2113/id_rsa...
	I0602 10:19:30.718810    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/ingress-addon-legacy-20220602101918-2113/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0602 10:19:30.718873    4182 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/ingress-addon-legacy-20220602101918-2113/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 10:19:30.830432    4182 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220602101918-2113 --format={{.State.Status}}
	I0602 10:19:30.896176    4182 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 10:19:30.896194    4182 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-20220602101918-2113 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0602 10:19:31.019761    4182 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220602101918-2113 --format={{.State.Status}}
	I0602 10:19:31.085300    4182 machine.go:88] provisioning docker machine ...
	I0602 10:19:31.085337    4182 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-20220602101918-2113"
	I0602 10:19:31.085423    4182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220602101918-2113
	I0602 10:19:31.151601    4182 main.go:134] libmachine: Using SSH client type: native
	I0602 10:19:31.151798    4182 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52981 <nil> <nil>}
	I0602 10:19:31.151818    4182 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-20220602101918-2113 && echo "ingress-addon-legacy-20220602101918-2113" | sudo tee /etc/hostname
	I0602 10:19:31.280288    4182 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-20220602101918-2113
	
	I0602 10:19:31.280370    4182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220602101918-2113
	I0602 10:19:31.347146    4182 main.go:134] libmachine: Using SSH client type: native
	I0602 10:19:31.347449    4182 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52981 <nil> <nil>}
	I0602 10:19:31.347465    4182 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-20220602101918-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-20220602101918-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-20220602101918-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 10:19:31.465025    4182 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 10:19:31.465049    4182 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 10:19:31.465069    4182 ubuntu.go:177] setting up certificates
	I0602 10:19:31.465081    4182 provision.go:83] configureAuth start
	I0602 10:19:31.465156    4182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220602101918-2113
	I0602 10:19:31.531783    4182 provision.go:138] copyHostCerts
	I0602 10:19:31.531819    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 10:19:31.531871    4182 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 10:19:31.531880    4182 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 10:19:31.531978    4182 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 10:19:31.532145    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 10:19:31.532187    4182 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 10:19:31.532191    4182 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 10:19:31.532250    4182 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 10:19:31.532360    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 10:19:31.532386    4182 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 10:19:31.532390    4182 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 10:19:31.532443    4182 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 10:19:31.532563    4182 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-20220602101918-2113 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-20220602101918-2113]
	I0602 10:19:31.699653    4182 provision.go:172] copyRemoteCerts
	I0602 10:19:31.699738    4182 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 10:19:31.699791    4182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220602101918-2113
	I0602 10:19:31.770108    4182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/ingress-addon-legacy-20220602101918-2113/id_rsa Username:docker}
	I0602 10:19:31.857225    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0602 10:19:31.857304    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0602 10:19:31.873844    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0602 10:19:31.873907    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 10:19:31.889923    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0602 10:19:31.889984    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 10:19:31.907122    4182 provision.go:86] duration metric: configureAuth took 442.028341ms
	I0602 10:19:31.907134    4182 ubuntu.go:193] setting minikube options for container-runtime
	I0602 10:19:31.907296    4182 config.go:178] Loaded profile config "ingress-addon-legacy-20220602101918-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0602 10:19:31.907358    4182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220602101918-2113
	I0602 10:19:31.974118    4182 main.go:134] libmachine: Using SSH client type: native
	I0602 10:19:31.974291    4182 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52981 <nil> <nil>}
	I0602 10:19:31.974309    4182 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 10:19:32.092931    4182 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 10:19:32.092947    4182 ubuntu.go:71] root file system type: overlay
	I0602 10:19:32.093105    4182 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 10:19:32.093187    4182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220602101918-2113
	I0602 10:19:32.159481    4182 main.go:134] libmachine: Using SSH client type: native
	I0602 10:19:32.159732    4182 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52981 <nil> <nil>}
	I0602 10:19:32.159781    4182 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 10:19:32.286445    4182 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 10:19:32.286541    4182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220602101918-2113
	I0602 10:19:32.353390    4182 main.go:134] libmachine: Using SSH client type: native
	I0602 10:19:32.353567    4182 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52981 <nil> <nil>}
	I0602 10:19:32.353581    4182 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 10:19:32.925239    4182 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:19:32.287561735 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0602 10:19:32.925268    4182 machine.go:91] provisioned docker machine in 1.839941348s
	I0602 10:19:32.925275    4182 client.go:171] LocalClient.Create took 7.944959722s
	I0602 10:19:32.925291    4182 start.go:173] duration metric: libmachine.API.Create for "ingress-addon-legacy-20220602101918-2113" took 7.945023119s
	I0602 10:19:32.925299    4182 start.go:306] post-start starting for "ingress-addon-legacy-20220602101918-2113" (driver="docker")
	I0602 10:19:32.925303    4182 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 10:19:32.925370    4182 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 10:19:32.925418    4182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220602101918-2113
	I0602 10:19:32.993285    4182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/ingress-addon-legacy-20220602101918-2113/id_rsa Username:docker}
	I0602 10:19:33.079079    4182 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 10:19:33.082339    4182 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 10:19:33.082354    4182 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 10:19:33.082362    4182 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 10:19:33.082367    4182 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 10:19:33.082385    4182 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 10:19:33.082493    4182 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 10:19:33.082623    4182 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 10:19:33.082629    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> /etc/ssl/certs/21132.pem
	I0602 10:19:33.082768    4182 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 10:19:33.089438    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 10:19:33.105881    4182 start.go:309] post-start completed in 180.573712ms
	I0602 10:19:33.106357    4182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220602101918-2113
	I0602 10:19:33.172599    4182 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/config.json ...
	I0602 10:19:33.173026    4182 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 10:19:33.173078    4182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220602101918-2113
	I0602 10:19:33.239452    4182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/ingress-addon-legacy-20220602101918-2113/id_rsa Username:docker}
	I0602 10:19:33.323567    4182 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 10:19:33.328047    4182 start.go:134] duration metric: createHost completed in 8.370456075s
	I0602 10:19:33.328067    4182 start.go:81] releasing machines lock for "ingress-addon-legacy-20220602101918-2113", held for 8.370599599s
	I0602 10:19:33.328149    4182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220602101918-2113
	I0602 10:19:33.395060    4182 ssh_runner.go:195] Run: systemctl --version
	I0602 10:19:33.395063    4182 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 10:19:33.395134    4182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220602101918-2113
	I0602 10:19:33.395150    4182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220602101918-2113
	I0602 10:19:33.467217    4182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/ingress-addon-legacy-20220602101918-2113/id_rsa Username:docker}
	I0602 10:19:33.468018    4182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/ingress-addon-legacy-20220602101918-2113/id_rsa Username:docker}
	I0602 10:19:33.679467    4182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 10:19:33.689035    4182 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 10:19:33.698079    4182 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 10:19:33.698134    4182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 10:19:33.706672    4182 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 10:19:33.719286    4182 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 10:19:33.783626    4182 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 10:19:33.847875    4182 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 10:19:33.857556    4182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 10:19:33.925640    4182 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 10:19:33.935010    4182 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 10:19:33.968141    4182 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 10:19:34.045279    4182 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.16 ...
	I0602 10:19:34.045487    4182 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-20220602101918-2113 dig +short host.docker.internal
	I0602 10:19:34.179937    4182 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 10:19:34.180028    4182 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 10:19:34.184326    4182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 10:19:34.194015    4182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-20220602101918-2113
	I0602 10:19:34.261826    4182 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0602 10:19:34.261891    4182 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 10:19:34.289906    4182 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0602 10:19:34.289921    4182 docker.go:541] Images already preloaded, skipping extraction
	I0602 10:19:34.289996    4182 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 10:19:34.319439    4182 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0602 10:19:34.319461    4182 cache_images.go:84] Images are preloaded, skipping loading
	I0602 10:19:34.319554    4182 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 10:19:34.392773    4182 cni.go:95] Creating CNI manager for ""
	I0602 10:19:34.392783    4182 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:19:34.392802    4182 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 10:19:34.392820    4182 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-20220602101918-2113 NodeName:ingress-addon-legacy-20220602101918-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 10:19:34.392914    4182 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-20220602101918-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 10:19:34.392996    4182 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-20220602101918-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220602101918-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 10:19:34.393059    4182 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0602 10:19:34.400343    4182 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 10:19:34.400404    4182 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 10:19:34.407188    4182 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0602 10:19:34.419183    4182 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0602 10:19:34.431401    4182 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2083 bytes)
	I0602 10:19:34.443494    4182 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0602 10:19:34.446965    4182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 10:19:34.456122    4182 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113 for IP: 192.168.49.2
	I0602 10:19:34.456230    4182 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 10:19:34.456279    4182 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 10:19:34.456326    4182 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/client.key
	I0602 10:19:34.456338    4182 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/client.crt with IP's: []
	I0602 10:19:34.823883    4182 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/client.crt ...
	I0602 10:19:34.823896    4182 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/client.crt: {Name:mk810748670b080ffa5ed56c6c3061cbae890225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:19:34.824340    4182 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/client.key ...
	I0602 10:19:34.824348    4182 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/client.key: {Name:mkea8612ee6dc3974dbcba1cbfdea3e9ce78be28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:19:34.824577    4182 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/apiserver.key.dd3b5fb2
	I0602 10:19:34.824597    4182 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0602 10:19:34.896898    4182 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/apiserver.crt.dd3b5fb2 ...
	I0602 10:19:34.896906    4182 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/apiserver.crt.dd3b5fb2: {Name:mk17be556cf81d3d482eb0676bec9e4b8ade0144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:19:34.897143    4182 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/apiserver.key.dd3b5fb2 ...
	I0602 10:19:34.897160    4182 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/apiserver.key.dd3b5fb2: {Name:mk84e6f3e99fa7724e936c3809dac26032fed454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:19:34.897407    4182 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/apiserver.crt
	I0602 10:19:34.897599    4182 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/apiserver.key
	I0602 10:19:34.897786    4182 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/proxy-client.key
	I0602 10:19:34.897799    4182 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/proxy-client.crt with IP's: []
	I0602 10:19:35.002351    4182 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/proxy-client.crt ...
	I0602 10:19:35.002360    4182 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/proxy-client.crt: {Name:mkeed057b3db956fa731b15bb2a4271f2b609e3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:19:35.002612    4182 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/proxy-client.key ...
	I0602 10:19:35.002619    4182 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/proxy-client.key: {Name:mk91c7082809be94fe6c4dbd6d7f8445d8cdb26a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:19:35.002842    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0602 10:19:35.002867    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0602 10:19:35.002886    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0602 10:19:35.002921    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0602 10:19:35.002937    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0602 10:19:35.002954    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0602 10:19:35.002988    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0602 10:19:35.003030    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0602 10:19:35.003167    4182 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 10:19:35.003210    4182 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 10:19:35.003218    4182 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 10:19:35.003273    4182 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 10:19:35.003322    4182 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 10:19:35.003372    4182 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 10:19:35.003441    4182 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 10:19:35.003473    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> /usr/share/ca-certificates/21132.pem
	I0602 10:19:35.003492    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:19:35.003510    4182 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem -> /usr/share/ca-certificates/2113.pem
	I0602 10:19:35.003993    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 10:19:35.022558    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0602 10:19:35.038969    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 10:19:35.055441    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602101918-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 10:19:35.072682    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 10:19:35.088624    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 10:19:35.105134    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 10:19:35.121211    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 10:19:35.137734    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 10:19:35.154261    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 10:19:35.170663    4182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 10:19:35.186977    4182 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 10:19:35.199239    4182 ssh_runner.go:195] Run: openssl version
	I0602 10:19:35.204355    4182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 10:19:35.211640    4182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:19:35.215491    4182 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:19:35.215532    4182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:19:35.220686    4182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 10:19:35.228157    4182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 10:19:35.235539    4182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 10:19:35.239204    4182 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 10:19:35.239256    4182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 10:19:35.244483    4182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
	I0602 10:19:35.251845    4182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 10:19:35.259300    4182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 10:19:35.263139    4182 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 10:19:35.263178    4182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 10:19:35.268170    4182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 10:19:35.275686    4182 kubeadm.go:395] StartCluster: {Name:ingress-addon-legacy-20220602101918-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220602101918-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:19:35.275789    4182 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 10:19:35.303325    4182 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 10:19:35.310672    4182 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 10:19:35.317796    4182 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 10:19:35.317845    4182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 10:19:35.324631    4182 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 10:19:35.324654    4182 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 10:19:36.038403    4182 out.go:204]   - Generating certificates and keys ...
	I0602 10:19:38.483408    4182 out.go:204]   - Booting up control plane ...
	W0602 10:21:33.405470    4182 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220602101918-2113 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220602101918-2113 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0602 17:19:35.373639     829 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0602 17:19:38.473889     829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0602 17:19:38.474664     829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220602101918-2113 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220602101918-2113 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0602 17:19:35.373639     829 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0602 17:19:38.473889     829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0602 17:19:38.474664     829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0602 10:21:33.405504    4182 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 10:21:33.826390    4182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 10:21:33.835486    4182 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 10:21:33.835531    4182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 10:21:33.842883    4182 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 10:21:33.842903    4182 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 10:21:34.532262    4182 out.go:204]   - Generating certificates and keys ...
	I0602 10:21:35.611075    4182 out.go:204]   - Booting up control plane ...
	I0602 10:23:30.529722    4182 kubeadm.go:397] StartCluster complete in 3m55.253042072s
	I0602 10:23:30.529802    4182 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 10:23:30.558502    4182 logs.go:274] 0 containers: []
	W0602 10:23:30.558514    4182 logs.go:276] No container was found matching "kube-apiserver"
	I0602 10:23:30.558581    4182 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 10:23:30.585855    4182 logs.go:274] 0 containers: []
	W0602 10:23:30.585866    4182 logs.go:276] No container was found matching "etcd"
	I0602 10:23:30.585921    4182 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 10:23:30.613352    4182 logs.go:274] 0 containers: []
	W0602 10:23:30.613364    4182 logs.go:276] No container was found matching "coredns"
	I0602 10:23:30.613437    4182 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 10:23:30.641014    4182 logs.go:274] 0 containers: []
	W0602 10:23:30.641027    4182 logs.go:276] No container was found matching "kube-scheduler"
	I0602 10:23:30.641088    4182 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 10:23:30.669356    4182 logs.go:274] 0 containers: []
	W0602 10:23:30.669368    4182 logs.go:276] No container was found matching "kube-proxy"
	I0602 10:23:30.669433    4182 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 10:23:30.697382    4182 logs.go:274] 0 containers: []
	W0602 10:23:30.697396    4182 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 10:23:30.697464    4182 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 10:23:30.725353    4182 logs.go:274] 0 containers: []
	W0602 10:23:30.725364    4182 logs.go:276] No container was found matching "storage-provisioner"
	I0602 10:23:30.725429    4182 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 10:23:30.754055    4182 logs.go:274] 0 containers: []
	W0602 10:23:30.754067    4182 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 10:23:30.754074    4182 logs.go:123] Gathering logs for dmesg ...
	I0602 10:23:30.754081    4182 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 10:23:30.765693    4182 logs.go:123] Gathering logs for describe nodes ...
	I0602 10:23:30.765705    4182 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 10:23:30.815802    4182 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 10:23:30.815814    4182 logs.go:123] Gathering logs for Docker ...
	I0602 10:23:30.815823    4182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 10:23:30.829596    4182 logs.go:123] Gathering logs for container status ...
	I0602 10:23:30.829607    4182 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 10:23:32.884090    4182 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054463092s)
	I0602 10:23:32.884198    4182 logs.go:123] Gathering logs for kubelet ...
	I0602 10:23:32.884205    4182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0602 10:23:32.923572    4182 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0602 17:21:33.890665    3323 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0602 17:21:35.600987    3323 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0602 17:21:35.601860    3323 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0602 10:23:32.923591    4182 out.go:239] * 
	W0602 10:23:32.923701    4182 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0602 17:21:33.890665    3323 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0602 17:21:35.600987    3323 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0602 17:21:35.601860    3323 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 10:23:32.923718    4182 out.go:239] * 
	* 
	W0602 10:23:32.924262    4182 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 10:23:32.990125    4182 out.go:177] 
	W0602 10:23:33.055505    4182 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0602 17:21:33.890665    3323 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0602 17:21:35.600987    3323 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0602 17:21:35.601860    3323 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 10:23:33.055704    4182 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0602 10:23:33.055827    4182 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0602 10:23:33.077326    4182 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220602101918-2113 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (254.45s)
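The [kubelet-check] lines in the failure above amount to kubeadm repeatedly probing the kubelet's healthz endpoint on localhost:10248 and getting "connection refused" until the 4m0s wait expires. Below is a minimal, illustrative Go sketch of that probe (not kubeadm's actual code); it assumes the kubelet's default healthz port 10248 and could be run on the node, alongside the log's own advice ('systemctl status kubelet', 'journalctl -xeu kubelet'), to confirm whether the kubelet ever comes up.

	// kubelet_healthz_probe.go: illustrative only; mirrors the probe described by
	// the [kubelet-check] lines above. Port 10248 is assumed to be the kubelet's
	// default healthz port; adjust if the node is configured differently.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		deadline := time.Now().Add(40 * time.Second) // the log's initial 40s timeout
		for {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err == nil {
				fmt.Printf("kubelet answered with HTTP %d\n", resp.StatusCode)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return
				}
			} else {
				// matches the "dial tcp 127.0.0.1:10248: connect: connection refused" lines above
				fmt.Printf("kubelet not reachable: %v\n", err)
			}
			if time.Now().After(deadline) {
				fmt.Println("timed out; check 'journalctl -xeu kubelet' as the log suggests")
				return
			}
			time.Sleep(5 * time.Second)
		}
	}

If the probe never succeeds, the suggestion printed at the end of the minikube output, re-running minikube start with --extra-config=kubelet.cgroup-driver=systemd, is the remediation the tool itself proposes for this class of failure.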

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.59s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220602101918-2113 addons enable ingress --alsologtostderr -v=5
E0602 10:23:42.257889    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:24:12.587246    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:24:23.219214    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:24:40.279467    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220602101918-2113 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.099084411s)

                                                
                                                
-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 10:23:33.248510    4361 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:23:33.248824    4361 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:23:33.248830    4361 out.go:309] Setting ErrFile to fd 2...
	I0602 10:23:33.248834    4361 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:23:33.248922    4361 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 10:23:33.249334    4361 config.go:178] Loaded profile config "ingress-addon-legacy-20220602101918-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0602 10:23:33.249349    4361 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220602101918-2113"
	I0602 10:23:33.249357    4361 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220602101918-2113"
	I0602 10:23:33.249650    4361 host.go:66] Checking if "ingress-addon-legacy-20220602101918-2113" exists ...
	I0602 10:23:33.250159    4361 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220602101918-2113 --format={{.State.Status}}
	I0602 10:23:33.345484    4361 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0602 10:23:33.365789    4361 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0602 10:23:33.387696    4361 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0602 10:23:33.409236    4361 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0602 10:23:33.430879    4361 addons.go:348] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0602 10:23:33.430925    4361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I0602 10:23:33.431057    4361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220602101918-2113
	I0602 10:23:33.500356    4361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/ingress-addon-legacy-20220602101918-2113/id_rsa Username:docker}
	I0602 10:23:33.591797    4361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0602 10:23:33.641663    4361 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:33.641684    4361 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:33.918263    4361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0602 10:23:33.971334    4361 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:33.971350    4361 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:34.513813    4361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0602 10:23:34.566476    4361 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:34.566491    4361 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:35.223908    4361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0602 10:23:35.277363    4361 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:35.277376    4361 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:36.070857    4361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0602 10:23:36.124592    4361 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:36.124614    4361 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:37.297122    4361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0602 10:23:37.352242    4361 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:37.352255    4361 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:39.607687    4361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0602 10:23:39.660141    4361 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:39.660159    4361 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:41.272407    4361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0602 10:23:41.325953    4361 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:41.325971    4361 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:44.131124    4361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0602 10:23:44.182948    4361 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:44.182964    4361 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:48.010185    4361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0602 10:23:48.062817    4361 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:48.062830    4361 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:55.760468    4361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0602 10:23:55.810367    4361 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:23:55.810381    4361 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:24:10.448306    4361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0602 10:24:10.499705    4361 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:24:10.499719    4361 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:24:38.908891    4361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0602 10:24:38.961242    4361 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:24:38.961262    4361 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:02.131713    4361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0602 10:25:02.182528    4361 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:02.182549    4361 addons.go:386] Verifying addon ingress=true in "ingress-addon-legacy-20220602101918-2113"
	I0602 10:25:02.204274    4361 out.go:177] * Verifying ingress addon...
	I0602 10:25:02.227246    4361 out.go:177] 
	W0602 10:25:02.249191    4361 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220602101918-2113" does not exist: client config: context "ingress-addon-legacy-20220602101918-2113" does not exist]
	W0602 10:25:02.249228    4361 out.go:239] * 
	W0602 10:25:02.252353    4361 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 10:25:02.274109    4361 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
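The stderr block above shows minikube's addon installer (retry.go) re-running kubectl apply against the apiserver on localhost:8443 with a growing delay until it gives up, which is consistent with the cluster never having started in the preceding test. The sketch below reproduces that retry-with-backoff pattern in plain Go for illustration; it is not minikube's own retry.go, it assumes a kubectl binary on PATH, and the kubeconfig and manifest paths are copied from the log as placeholders.

	// apply_with_retry.go: illustrative retry-with-backoff around "kubectl apply",
	// the pattern visible in the retry.go lines above. Paths are placeholders
	// taken from the log; a kubectl binary on PATH is assumed.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := 250 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			cmd := exec.Command("kubectl",
				"--kubeconfig", "/var/lib/minikube/kubeconfig",
				"apply", "-f", "/etc/kubernetes/addons/ingress-deploy.yaml")
			out, err := cmd.CombinedOutput()
			if err == nil {
				fmt.Println("apply succeeded")
				return
			}
			fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
			time.Sleep(delay)
			delay *= 2 // grow the wait between attempts, as the widening intervals in the log do
		}
		fmt.Println("giving up: the apiserver on localhost:8443 never became reachable")
	}

Because enabling the addon is just applying a manifest, the MK_ADDON_ENABLE exit here is a downstream symptom of the kubelet/apiserver failure rather than anything specific to the ingress addon.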
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220602101918-2113
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220602101918-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c",
	        "Created": "2022-06-02T17:19:30.075891459Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 28688,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:19:30.361233793Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c/hosts",
	        "LogPath": "/var/lib/docker/containers/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c-json.log",
	        "Name": "/ingress-addon-legacy-20220602101918-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-20220602101918-2113:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220602101918-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e96abf649f7c80191118239f740d4e865d8d98d84813c7b4f3b515ca028b98c9-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e96abf649f7c80191118239f740d4e865d8d98d84813c7b4f3b515ca028b98c9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e96abf649f7c80191118239f740d4e865d8d98d84813c7b4f3b515ca028b98c9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e96abf649f7c80191118239f740d4e865d8d98d84813c7b4f3b515ca028b98c9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220602101918-2113",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220602101918-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220602101918-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220602101918-2113",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220602101918-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9233dca11945262cbc7dfd64c05d05928acbf39e4b18f8dbbd86ddc7eb8a154e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52981"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52977"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52978"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52979"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52980"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9233dca11945",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220602101918-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8b8960229d33",
	                        "ingress-addon-legacy-20220602101918-2113"
	                    ],
	                    "NetworkID": "9df9039d72f716078f7f362b1db30cd25fdb120f419c395a279dc3f10dd90e56",
	                    "EndpointID": "0abdfacfd05bd9b7d6d316875ffb83a9ffce69967907594f1c8ce916138c01be",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220602101918-2113 -n ingress-addon-legacy-20220602101918-2113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220602101918-2113 -n ingress-addon-legacy-20220602101918-2113: exit status 6 (424.895783ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 10:25:02.782031    4409 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220602101918-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220602101918-2113" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.59s)
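Note: the status output above warns that kubectl is pointing at a stale context, and the stderr shows the profile is missing from the shared kubeconfig. A manual recovery sketch, not part of the test run (it assumes the profile container is still up, that -p selects the same profile as in the other commands in this log, and that minikube names the kubectl context after the profile):

	out/minikube-darwin-amd64 -p ingress-addon-legacy-20220602101918-2113 update-context
	kubectl config use-context ingress-addon-legacy-20220602101918-2113

This only repairs the kubeconfig entry for the profile; it does not by itself make the cluster healthy.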

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.5s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220602101918-2113 addons enable ingress-dns --alsologtostderr -v=5
E0602 10:25:45.140724    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220602101918-2113 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.004211048s)

                                                
                                                
-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 10:25:02.839727    4419 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:25:02.840014    4419 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:25:02.840020    4419 out.go:309] Setting ErrFile to fd 2...
	I0602 10:25:02.840024    4419 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:25:02.840129    4419 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 10:25:02.840550    4419 config.go:178] Loaded profile config "ingress-addon-legacy-20220602101918-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0602 10:25:02.840564    4419 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-20220602101918-2113"
	I0602 10:25:02.840572    4419 addons.go:153] Setting addon ingress-dns=true in "ingress-addon-legacy-20220602101918-2113"
	I0602 10:25:02.840810    4419 host.go:66] Checking if "ingress-addon-legacy-20220602101918-2113" exists ...
	I0602 10:25:02.841292    4419 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220602101918-2113 --format={{.State.Status}}
	I0602 10:25:02.928907    4419 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0602 10:25:02.955114    4419 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0602 10:25:02.976711    4419 addons.go:348] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0602 10:25:02.976750    4419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0602 10:25:02.976886    4419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220602101918-2113
	I0602 10:25:03.043797    4419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/ingress-addon-legacy-20220602101918-2113/id_rsa Username:docker}
	I0602 10:25:03.136048    4419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0602 10:25:03.185385    4419 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:03.185413    4419 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:03.463848    4419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0602 10:25:03.516863    4419 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:03.516881    4419 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:04.057718    4419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0602 10:25:04.108516    4419 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:04.108533    4419 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:04.764064    4419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0602 10:25:04.812276    4419 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:04.812298    4419 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:05.605779    4419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0602 10:25:05.656617    4419 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:05.656641    4419 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:06.829188    4419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0602 10:25:06.881272    4419 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:06.881294    4419 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:09.136370    4419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0602 10:25:09.187316    4419 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:09.187333    4419 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:10.798867    4419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0602 10:25:10.848099    4419 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:10.848114    4419 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:13.654756    4419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0602 10:25:13.705878    4419 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:13.705892    4419 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:17.531020    4419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0602 10:25:17.581432    4419 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:17.581451    4419 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:25.281232    4419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0602 10:25:25.333668    4419 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:25.333682    4419 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:39.970122    4419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0602 10:25:40.021092    4419 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:25:40.021107    4419 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:26:08.428103    4419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0602 10:26:08.479885    4419 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:26:08.479902    4419 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:26:31.648583    4419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0602 10:26:31.704596    4419 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0602 10:26:31.726565    4419 out.go:177] 
	W0602 10:26:31.748220    4419 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0602 10:26:31.748250    4419 out.go:239] * 
	* 
	W0602 10:26:31.751264    4419 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 10:26:31.772478    4419 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
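The stderr above shows how the addon enable works: the manifest is copied to /etc/kubernetes/addons/ingress-dns-pod.yaml inside the node, then kubectl apply is retried with growing delays until minikube gives up with MK_ADDON_ENABLE. A rough shell sketch for reproducing the same apply by hand with a simple backoff loop (it assumes the profile container is still running and reuses the exact paths from the log; the delays are illustrative, not minikube's actual retry schedule):

	for delay in 1 2 4 8 16; do
	  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220602101918-2113 ssh -- \
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml \
	    && break
	  sleep "$delay"
	done

In this run every attempt failed identically ("The connection to the server localhost:8443 was refused"), so the retries only delayed the exit status 10.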
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220602101918-2113
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220602101918-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c",
	        "Created": "2022-06-02T17:19:30.075891459Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 28688,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:19:30.361233793Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c/hosts",
	        "LogPath": "/var/lib/docker/containers/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c-json.log",
	        "Name": "/ingress-addon-legacy-20220602101918-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-20220602101918-2113:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220602101918-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e96abf649f7c80191118239f740d4e865d8d98d84813c7b4f3b515ca028b98c9-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e96abf649f7c80191118239f740d4e865d8d98d84813c7b4f3b515ca028b98c9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e96abf649f7c80191118239f740d4e865d8d98d84813c7b4f3b515ca028b98c9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e96abf649f7c80191118239f740d4e865d8d98d84813c7b4f3b515ca028b98c9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220602101918-2113",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220602101918-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220602101918-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220602101918-2113",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220602101918-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9233dca11945262cbc7dfd64c05d05928acbf39e4b18f8dbbd86ddc7eb8a154e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52981"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52977"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52978"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52979"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52980"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9233dca11945",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220602101918-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8b8960229d33",
	                        "ingress-addon-legacy-20220602101918-2113"
	                    ],
	                    "NetworkID": "9df9039d72f716078f7f362b1db30cd25fdb120f419c395a279dc3f10dd90e56",
	                    "EndpointID": "0abdfacfd05bd9b7d6d316875ffb83a9ffce69967907594f1c8ce916138c01be",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220602101918-2113 -n ingress-addon-legacy-20220602101918-2113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220602101918-2113 -n ingress-addon-legacy-20220602101918-2113: exit status 6 (423.833294ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 10:26:32.279792    4466 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220602101918-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220602101918-2113" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.50s)
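Every apply attempt in this test failed with a refused connection to localhost:8443, i.e. the apiserver inside the node was not reachable. A quick probe from the host, assuming the port mapping shown in the inspect output above (8443/tcp published on 127.0.0.1:52980) is still active:

	curl -sk https://127.0.0.1:52980/healthz

A refused connection here points at the apiserver in the kic container rather than at the host-side tooling.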

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.5s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:156: failed to get Kubernetes client: <nil>
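The test cannot build a Kubernetes client because the profile has no entry in the shared kubeconfig, consistent with the status errors earlier in this report. A way to confirm what that kubeconfig actually contains, assuming the same path reported in those errors:

	kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig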
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220602101918-2113
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220602101918-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c",
	        "Created": "2022-06-02T17:19:30.075891459Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 28688,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:19:30.361233793Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c/hosts",
	        "LogPath": "/var/lib/docker/containers/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c/8b8960229d33ffdc8bafb9b5fa59bea96080282f84a4e691380c2f78c23b4f7c-json.log",
	        "Name": "/ingress-addon-legacy-20220602101918-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-20220602101918-2113:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220602101918-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e96abf649f7c80191118239f740d4e865d8d98d84813c7b4f3b515ca028b98c9-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e96abf649f7c80191118239f740d4e865d8d98d84813c7b4f3b515ca028b98c9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e96abf649f7c80191118239f740d4e865d8d98d84813c7b4f3b515ca028b98c9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e96abf649f7c80191118239f740d4e865d8d98d84813c7b4f3b515ca028b98c9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220602101918-2113",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220602101918-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220602101918-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220602101918-2113",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220602101918-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9233dca11945262cbc7dfd64c05d05928acbf39e4b18f8dbbd86ddc7eb8a154e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52981"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52977"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52978"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52979"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52980"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9233dca11945",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220602101918-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8b8960229d33",
	                        "ingress-addon-legacy-20220602101918-2113"
	                    ],
	                    "NetworkID": "9df9039d72f716078f7f362b1db30cd25fdb120f419c395a279dc3f10dd90e56",
	                    "EndpointID": "0abdfacfd05bd9b7d6d316875ffb83a9ffce69967907594f1c8ce916138c01be",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220602101918-2113 -n ingress-addon-legacy-20220602101918-2113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220602101918-2113 -n ingress-addon-legacy-20220602101918-2113: exit status 6 (425.413111ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 10:26:32.778452    4478 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220602101918-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220602101918-2113" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.50s)

                                                
                                    
TestPreload (263.35s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220602103745-2113 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0602 10:38:01.291584    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 10:39:12.598126    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:39:24.355850    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
preload_test.go:48: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-20220602103745-2113 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: exit status 109 (4m20.273928581s)

                                                
                                                
-- stdout --
	* [test-preload-20220602103745-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node test-preload-20220602103745-2113 in cluster test-preload-20220602103745-2113
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.17.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 10:37:45.214516    7781 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:37:45.214711    7781 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:37:45.214716    7781 out.go:309] Setting ErrFile to fd 2...
	I0602 10:37:45.214720    7781 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:37:45.214847    7781 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 10:37:45.215170    7781 out.go:303] Setting JSON to false
	I0602 10:37:45.230519    7781 start.go:115] hostinfo: {"hostname":"37309.local","uptime":2235,"bootTime":1654189230,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 10:37:45.230636    7781 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 10:37:45.252099    7781 out.go:177] * [test-preload-20220602103745-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 10:37:45.294085    7781 notify.go:193] Checking for updates...
	I0602 10:37:45.316142    7781 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 10:37:45.337930    7781 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 10:37:45.359293    7781 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 10:37:45.381093    7781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 10:37:45.402842    7781 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 10:37:45.424347    7781 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 10:37:45.495708    7781 docker.go:137] docker version: linux-20.10.14
	I0602 10:37:45.495842    7781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:37:45.621343    7781 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:48 SystemTime:2022-06-02 17:37:45.565770673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:37:45.643456    7781 out.go:177] * Using the docker driver based on user configuration
	I0602 10:37:45.664943    7781 start.go:284] selected driver: docker
	I0602 10:37:45.664977    7781 start.go:806] validating driver "docker" against <nil>
	I0602 10:37:45.665002    7781 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 10:37:45.668436    7781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:37:45.793995    7781 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:48 SystemTime:2022-06-02 17:37:45.73896086 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:37:45.794154    7781 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 10:37:45.794310    7781 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 10:37:45.816198    7781 out.go:177] * Using Docker Desktop driver with the root privilege
	I0602 10:37:45.837969    7781 cni.go:95] Creating CNI manager for ""
	I0602 10:37:45.838002    7781 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:37:45.838014    7781 start_flags.go:306] config:
	{Name:test-preload-20220602103745-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220602103745-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:37:45.859759    7781 out.go:177] * Starting control plane node test-preload-20220602103745-2113 in cluster test-preload-20220602103745-2113
	I0602 10:37:45.901979    7781 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 10:37:45.923723    7781 out.go:177] * Pulling base image ...
	I0602 10:37:45.966987    7781 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 10:37:45.967014    7781 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0602 10:37:45.967317    7781 cache.go:107] acquiring lock: {Name:mkdde9f9d80d920e7e403c8a91a985aa38c1e9d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:37:45.967329    7781 cache.go:107] acquiring lock: {Name:mk5d139db44e2ee8c6cc972051f78ce4370ae58b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:37:45.969022    7781 cache.go:107] acquiring lock: {Name:mk71cc5a6f9c75b0624a23a2bd3838c2853f3adf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:37:45.969326    7781 cache.go:107] acquiring lock: {Name:mk415b0ebdbe55fe95723c80382f28ea3252359a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:37:45.969330    7781 cache.go:107] acquiring lock: {Name:mk1fcca5e1a9514983b5673bfea8c6488576dff3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:37:45.969370    7781 cache.go:107] acquiring lock: {Name:mk870543ce7fe47f6b1026ac465f3451b04b6920 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:37:45.969397    7781 cache.go:107] acquiring lock: {Name:mk8f42084f4319da53c2e9aecca30bb32a8e1236 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:37:45.969418    7781 cache.go:107] acquiring lock: {Name:mk216f80f9af2b5c2a2119f4910d47b1d17e716e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:37:45.969478    7781 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0602 10:37:45.969507    7781 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.207997ms
	I0602 10:37:45.969540    7781 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0602 10:37:45.970041    7781 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0602 10:37:45.970071    7781 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0602 10:37:45.970148    7781 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0602 10:37:45.970183    7781 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0602 10:37:45.970398    7781 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0602 10:37:45.970441    7781 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0602 10:37:45.970436    7781 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0602 10:37:45.970636    7781 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/config.json ...
	I0602 10:37:45.970683    7781 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/config.json: {Name:mkb8dfae06c0f2a3ddd82f3b1630c89608b70042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:37:45.976496    7781 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: reference does not exist
	I0602 10:37:45.976646    7781 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error response from daemon: reference does not exist
	I0602 10:37:45.977600    7781 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error response from daemon: reference does not exist
	I0602 10:37:45.978112    7781 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
	I0602 10:37:45.978285    7781 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error response from daemon: reference does not exist
	I0602 10:37:45.978672    7781 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error response from daemon: reference does not exist
	I0602 10:37:45.978955    7781 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error response from daemon: reference does not exist
	I0602 10:37:46.037481    7781 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 10:37:46.037502    7781 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 10:37:46.037515    7781 cache.go:206] Successfully downloaded all kic artifacts
	I0602 10:37:46.037550    7781 start.go:352] acquiring machines lock for test-preload-20220602103745-2113: {Name:mk8e1abb166d99e75a654413684773cd44d25a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:37:46.037677    7781 start.go:356] acquired machines lock for "test-preload-20220602103745-2113" in 117.482µs
	I0602 10:37:46.037702    7781 start.go:91] Provisioning new machine with config: &{Name:test-preload-20220602103745-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220602103745-2113 Namesp
ace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 10:37:46.037822    7781 start.go:131] createHost starting for "" (driver="docker")
	I0602 10:37:46.079676    7781 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0602 10:37:46.079904    7781 start.go:165] libmachine.API.Create for "test-preload-20220602103745-2113" (driver="docker")
	I0602 10:37:46.079931    7781 client.go:168] LocalClient.Create starting
	I0602 10:37:46.079998    7781 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem
	I0602 10:37:46.080032    7781 main.go:134] libmachine: Decoding PEM data...
	I0602 10:37:46.080044    7781 main.go:134] libmachine: Parsing certificate...
	I0602 10:37:46.080104    7781 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem
	I0602 10:37:46.080129    7781 main.go:134] libmachine: Decoding PEM data...
	I0602 10:37:46.080142    7781 main.go:134] libmachine: Parsing certificate...
	I0602 10:37:46.080562    7781 cli_runner.go:164] Run: docker network inspect test-preload-20220602103745-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 10:37:46.142533    7781 cli_runner.go:211] docker network inspect test-preload-20220602103745-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 10:37:46.142612    7781 network_create.go:272] running [docker network inspect test-preload-20220602103745-2113] to gather additional debugging logs...
	I0602 10:37:46.142626    7781 cli_runner.go:164] Run: docker network inspect test-preload-20220602103745-2113
	W0602 10:37:46.204349    7781 cli_runner.go:211] docker network inspect test-preload-20220602103745-2113 returned with exit code 1
	I0602 10:37:46.204367    7781 network_create.go:275] error running [docker network inspect test-preload-20220602103745-2113]: docker network inspect test-preload-20220602103745-2113: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220602103745-2113
	I0602 10:37:46.204378    7781 network_create.go:277] output of [docker network inspect test-preload-20220602103745-2113]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220602103745-2113
	
	** /stderr **
	I0602 10:37:46.204433    7781 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 10:37:46.266720    7781 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000e6d0] misses:0}
	I0602 10:37:46.266754    7781 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 10:37:46.266770    7781 network_create.go:115] attempt to create docker network test-preload-20220602103745-2113 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 10:37:46.266827    7781 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220602103745-2113
	I0602 10:37:46.359390    7781 network_create.go:99] docker network test-preload-20220602103745-2113 192.168.49.0/24 created
	I0602 10:37:46.359412    7781 kic.go:106] calculated static IP "192.168.49.2" for the "test-preload-20220602103745-2113" container
	I0602 10:37:46.359480    7781 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 10:37:46.421654    7781 cli_runner.go:164] Run: docker volume create test-preload-20220602103745-2113 --label name.minikube.sigs.k8s.io=test-preload-20220602103745-2113 --label created_by.minikube.sigs.k8s.io=true
	I0602 10:37:46.484302    7781 oci.go:103] Successfully created a docker volume test-preload-20220602103745-2113
	I0602 10:37:46.484377    7781 cli_runner.go:164] Run: docker run --rm --name test-preload-20220602103745-2113-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220602103745-2113 --entrypoint /usr/bin/test -v test-preload-20220602103745-2113:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 10:37:46.489234    7781 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0602 10:37:46.490812    7781 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0602 10:37:46.516880    7781 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0602 10:37:46.517803    7781 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0602 10:37:46.518929    7781 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0602 10:37:46.533516    7781 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0602 10:37:46.540599    7781 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0602 10:37:46.595354    7781 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0602 10:37:46.595369    7781 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 627.111849ms
	I0602 10:37:46.595379    7781 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0602 10:37:47.014658    7781 oci.go:107] Successfully prepared a docker volume test-preload-20220602103745-2113
	I0602 10:37:47.014707    7781 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0602 10:37:47.014807    7781 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 10:37:47.142715    7781 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-20220602103745-2113 --name test-preload-20220602103745-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220602103745-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-20220602103745-2113 --network test-preload-20220602103745-2113 --ip 192.168.49.2 --volume test-preload-20220602103745-2113:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	I0602 10:37:47.442165    7781 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 exists
	I0602 10:37:47.442192    7781 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5" took 1.473003204s
	I0602 10:37:47.442201    7781 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 succeeded
	I0602 10:37:47.529039    7781 cli_runner.go:164] Run: docker container inspect test-preload-20220602103745-2113 --format={{.State.Running}}
	I0602 10:37:47.602077    7781 cli_runner.go:164] Run: docker container inspect test-preload-20220602103745-2113 --format={{.State.Status}}
	I0602 10:37:47.675166    7781 cli_runner.go:164] Run: docker exec test-preload-20220602103745-2113 stat /var/lib/dpkg/alternatives/iptables
	I0602 10:37:47.800250    7781 oci.go:247] the created container "test-preload-20220602103745-2113" has a running status.
	I0602 10:37:47.800278    7781 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/test-preload-20220602103745-2113/id_rsa...
	I0602 10:37:48.196570    7781 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/test-preload-20220602103745-2113/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 10:37:48.319620    7781 cli_runner.go:164] Run: docker container inspect test-preload-20220602103745-2113 --format={{.State.Status}}
	I0602 10:37:48.387329    7781 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 10:37:48.387343    7781 kic_runner.go:114] Args: [docker exec --privileged test-preload-20220602103745-2113 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0602 10:37:48.420237    7781 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 exists
	I0602 10:37:48.420264    7781 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0" took 2.451263632s
	I0602 10:37:48.420283    7781 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
	I0602 10:37:48.513843    7781 cli_runner.go:164] Run: docker container inspect test-preload-20220602103745-2113 --format={{.State.Status}}
	I0602 10:37:48.581514    7781 machine.go:88] provisioning docker machine ...
	I0602 10:37:48.581540    7781 ubuntu.go:169] provisioning hostname "test-preload-20220602103745-2113"
	I0602 10:37:48.581607    7781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220602103745-2113
	I0602 10:37:48.649312    7781 main.go:134] libmachine: Using SSH client type: native
	I0602 10:37:48.649546    7781 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58814 <nil> <nil>}
	I0602 10:37:48.649559    7781 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-20220602103745-2113 && echo "test-preload-20220602103745-2113" | sudo tee /etc/hostname
	I0602 10:37:48.746308    7781 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
	I0602 10:37:48.746326    7781 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0" took 2.779023526s
	I0602 10:37:48.746338    7781 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
	I0602 10:37:48.771195    7781 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-20220602103745-2113
	
	I0602 10:37:48.771265    7781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220602103745-2113
	I0602 10:37:48.778655    7781 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 exists
	I0602 10:37:48.778686    7781 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0" took 2.80969384s
	I0602 10:37:48.778702    7781 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
	I0602 10:37:48.838434    7781 main.go:134] libmachine: Using SSH client type: native
	I0602 10:37:48.838584    7781 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58814 <nil> <nil>}
	I0602 10:37:48.838601    7781 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20220602103745-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20220602103745-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20220602103745-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 10:37:48.953652    7781 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 10:37:48.953675    7781 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 10:37:48.953700    7781 ubuntu.go:177] setting up certificates
	I0602 10:37:48.953729    7781 provision.go:83] configureAuth start
	I0602 10:37:48.953801    7781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220602103745-2113
	I0602 10:37:49.022087    7781 provision.go:138] copyHostCerts
	I0602 10:37:49.022174    7781 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 10:37:49.022186    7781 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 10:37:49.022291    7781 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 10:37:49.022556    7781 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 10:37:49.022569    7781 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 10:37:49.022650    7781 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 10:37:49.022877    7781 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 10:37:49.022885    7781 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 10:37:49.022970    7781 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 10:37:49.023130    7781 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.test-preload-20220602103745-2113 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20220602103745-2113]
	I0602 10:37:49.067030    7781 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 exists
	I0602 10:37:49.067051    7781 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0" took 3.097932684s
	I0602 10:37:49.067064    7781 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
	I0602 10:37:49.144962    7781 provision.go:172] copyRemoteCerts
	I0602 10:37:49.145024    7781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 10:37:49.145071    7781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220602103745-2113
	I0602 10:37:49.211422    7781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58814 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/test-preload-20220602103745-2113/id_rsa Username:docker}
	I0602 10:37:49.295110    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 10:37:49.312147    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0602 10:37:49.329189    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0602 10:37:49.337541    7781 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 exists
	I0602 10:37:49.337557    7781 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0" took 3.368375762s
	I0602 10:37:49.337566    7781 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0602 10:37:49.337580    7781 cache.go:87] Successfully saved all images to host disk.
	I0602 10:37:49.345815    7781 provision.go:86] duration metric: configureAuth took 392.064368ms
	I0602 10:37:49.345826    7781 ubuntu.go:193] setting minikube options for container-runtime
	I0602 10:37:49.345952    7781 config.go:178] Loaded profile config "test-preload-20220602103745-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0602 10:37:49.346015    7781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220602103745-2113
	I0602 10:37:49.413528    7781 main.go:134] libmachine: Using SSH client type: native
	I0602 10:37:49.413880    7781 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58814 <nil> <nil>}
	I0602 10:37:49.413895    7781 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 10:37:49.528379    7781 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 10:37:49.528396    7781 ubuntu.go:71] root file system type: overlay
	I0602 10:37:49.528601    7781 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 10:37:49.528684    7781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220602103745-2113
	I0602 10:37:49.595012    7781 main.go:134] libmachine: Using SSH client type: native
	I0602 10:37:49.595232    7781 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58814 <nil> <nil>}
	I0602 10:37:49.595296    7781 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 10:37:49.722315    7781 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 10:37:49.722395    7781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220602103745-2113
	I0602 10:37:49.789365    7781 main.go:134] libmachine: Using SSH client type: native
	I0602 10:37:49.789573    7781 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58814 <nil> <nil>}
	I0602 10:37:49.789589    7781 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 10:37:50.370747    7781 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:37:49.736088592 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0602 10:37:50.370766    7781 machine.go:91] provisioned docker machine in 1.78923159s
	I0602 10:37:50.370772    7781 client.go:171] LocalClient.Create took 4.290819207s
	I0602 10:37:50.370807    7781 start.go:173] duration metric: libmachine.API.Create for "test-preload-20220602103745-2113" took 4.290881851s
	I0602 10:37:50.370817    7781 start.go:306] post-start starting for "test-preload-20220602103745-2113" (driver="docker")
	I0602 10:37:50.370821    7781 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 10:37:50.370881    7781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 10:37:50.370929    7781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220602103745-2113
	I0602 10:37:50.438220    7781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58814 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/test-preload-20220602103745-2113/id_rsa Username:docker}
	I0602 10:37:50.524667    7781 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 10:37:50.528140    7781 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 10:37:50.528154    7781 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 10:37:50.528161    7781 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 10:37:50.528168    7781 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 10:37:50.528182    7781 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 10:37:50.528285    7781 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 10:37:50.528424    7781 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 10:37:50.528565    7781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 10:37:50.535378    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 10:37:50.553745    7781 start.go:309] post-start completed in 182.920014ms
	I0602 10:37:50.554302    7781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220602103745-2113
	I0602 10:37:50.620896    7781 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/config.json ...
	I0602 10:37:50.621374    7781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 10:37:50.621420    7781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220602103745-2113
	I0602 10:37:50.687922    7781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58814 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/test-preload-20220602103745-2113/id_rsa Username:docker}
	I0602 10:37:50.771960    7781 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 10:37:50.776412    7781 start.go:134] duration metric: createHost completed in 4.738561476s
	I0602 10:37:50.776425    7781 start.go:81] releasing machines lock for "test-preload-20220602103745-2113", held for 4.738722155s
	I0602 10:37:50.776491    7781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220602103745-2113
	I0602 10:37:50.843009    7781 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 10:37:50.843015    7781 ssh_runner.go:195] Run: systemctl --version
	I0602 10:37:50.843077    7781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220602103745-2113
	I0602 10:37:50.843076    7781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220602103745-2113
	I0602 10:37:50.913971    7781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58814 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/test-preload-20220602103745-2113/id_rsa Username:docker}
	I0602 10:37:50.914440    7781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58814 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/test-preload-20220602103745-2113/id_rsa Username:docker}
	I0602 10:37:51.129257    7781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 10:37:51.138220    7781 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 10:37:51.147409    7781 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 10:37:51.147462    7781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 10:37:51.156505    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 10:37:51.169492    7781 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 10:37:51.244678    7781 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 10:37:51.308109    7781 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 10:37:51.317629    7781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 10:37:51.383501    7781 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 10:37:51.392888    7781 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 10:37:51.427407    7781 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 10:37:51.483013    7781 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 20.10.16 ...
	I0602 10:37:51.483165    7781 cli_runner.go:164] Run: docker exec -t test-preload-20220602103745-2113 dig +short host.docker.internal
	I0602 10:37:51.617875    7781 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 10:37:51.618069    7781 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 10:37:51.622546    7781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 10:37:51.632441    7781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" test-preload-20220602103745-2113
	I0602 10:37:51.698751    7781 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0602 10:37:51.698805    7781 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 10:37:51.727413    7781 docker.go:610] Got preloaded images: 
	I0602 10:37:51.727426    7781 docker.go:616] k8s.gcr.io/kube-apiserver:v1.17.0 wasn't preloaded
	I0602 10:37:51.727434    7781 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0602 10:37:51.733119    7781 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0602 10:37:51.733736    7781 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0602 10:37:51.734001    7781 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0602 10:37:51.734629    7781 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0602 10:37:51.735317    7781 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0602 10:37:51.735625    7781 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 10:37:51.736300    7781 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0602 10:37:51.736455    7781 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0602 10:37:51.741321    7781 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error response from daemon: reference does not exist
	I0602 10:37:51.741602    7781 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error response from daemon: reference does not exist
	I0602 10:37:51.742509    7781 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error response from daemon: reference does not exist
	I0602 10:37:51.742852    7781 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
	I0602 10:37:51.744256    7781 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error response from daemon: reference does not exist
	I0602 10:37:51.744272    7781 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
	I0602 10:37:51.744275    7781 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error response from daemon: reference does not exist
	I0602 10:37:51.744823    7781 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: reference does not exist
	I0602 10:37:52.141006    7781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.0
	I0602 10:37:52.156201    7781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.0
	I0602 10:37:52.172360    7781 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.17.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
	I0602 10:37:52.172395    7781 docker.go:291] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0602 10:37:52.172447    7781 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.17.0
	I0602 10:37:52.187622    7781 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.17.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
	I0602 10:37:52.187643    7781 docker.go:291] Removing image: k8s.gcr.io/kube-proxy:v1.17.0
	I0602 10:37:52.187694    7781 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-proxy:v1.17.0
	I0602 10:37:52.188988    7781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.0
	I0602 10:37:52.205514    7781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0602 10:37:52.205635    7781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0602 10:37:52.219602    7781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0602 10:37:52.219719    7781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
	I0602 10:37:52.224924    7781 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.17.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
	I0602 10:37:52.224940    7781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.0': No such file or directory
	I0602 10:37:52.224947    7781 docker.go:291] Removing image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0602 10:37:52.224977    7781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.0': No such file or directory
	I0602 10:37:52.224981    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 --> /var/lib/minikube/images/kube-controller-manager_v1.17.0 (48791552 bytes)
	I0602 10:37:52.224993    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 --> /var/lib/minikube/images/kube-proxy_v1.17.0 (48705536 bytes)
	I0602 10:37:52.224994    7781 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-scheduler:v1.17.0
	I0602 10:37:52.229183    7781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0602 10:37:52.259340    7781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.0
	I0602 10:37:52.277837    7781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0602 10:37:52.290640    7781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0602 10:37:52.290828    7781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0602 10:37:52.312147    7781 cache_images.go:116] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0602 10:37:52.312173    7781 docker.go:291] Removing image: k8s.gcr.io/etcd:3.4.3-0
	I0602 10:37:52.312229    7781 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/etcd:3.4.3-0
	I0602 10:37:52.355676    7781 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.17.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
	I0602 10:37:52.355717    7781 docker.go:291] Removing image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0602 10:37:52.355797    7781 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.17.0
	I0602 10:37:52.359116    7781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0602 10:37:52.374504    7781 cache_images.go:116] "k8s.gcr.io/pause:3.1" needs transfer: "k8s.gcr.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0602 10:37:52.374537    7781 docker.go:291] Removing image: k8s.gcr.io/pause:3.1
	I0602 10:37:52.374554    7781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.0': No such file or directory
	I0602 10:37:52.374603    7781 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/pause:3.1
	I0602 10:37:52.374603    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 --> /var/lib/minikube/images/kube-scheduler_v1.17.0 (33822208 bytes)
	I0602 10:37:52.402213    7781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0602 10:37:52.402340    7781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I0602 10:37:52.458303    7781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0602 10:37:52.458457    7781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0602 10:37:52.466358    7781 cache_images.go:116] "k8s.gcr.io/coredns:1.6.5" needs transfer: "k8s.gcr.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
	I0602 10:37:52.466383    7781 docker.go:291] Removing image: k8s.gcr.io/coredns:1.6.5
	I0602 10:37:52.466435    7781 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns:1.6.5
	I0602 10:37:52.494788    7781 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.4.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.3-0': No such file or directory
	I0602 10:37:52.494820    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
	I0602 10:37:52.497752    7781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0602 10:37:52.497897    7781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0602 10:37:52.527771    7781 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.0': No such file or directory
	I0602 10:37:52.527797    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 --> /var/lib/minikube/images/kube-apiserver_v1.17.0 (50629632 bytes)
	I0602 10:37:52.566855    7781 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 10:37:52.567557    7781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0602 10:37:52.567742    7781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
	I0602 10:37:52.574051    7781 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0602 10:37:52.574083    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0602 10:37:52.673441    7781 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0602 10:37:52.673444    7781 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_1.6.5: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.6.5': No such file or directory
	I0602 10:37:52.673483    7781 docker.go:291] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 10:37:52.673503    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 --> /var/lib/minikube/images/coredns_1.6.5 (13241856 bytes)
	I0602 10:37:52.673537    7781 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 10:37:52.716472    7781 docker.go:258] Loading image: /var/lib/minikube/images/pause_3.1
	I0602 10:37:52.716485    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
	I0602 10:37:52.768937    7781 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0602 10:37:52.769063    7781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0602 10:37:52.980094    7781 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0602 10:37:52.980131    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0602 10:37:52.983715    7781 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 from cache
	I0602 10:37:53.977269    7781 docker.go:258] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0602 10:37:53.977302    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0602 10:37:54.603349    7781 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0602 10:37:54.603375    7781 docker.go:258] Loading image: /var/lib/minikube/images/coredns_1.6.5
	I0602 10:37:54.603388    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_1.6.5 | docker load"
	I0602 10:37:55.415545    7781 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 from cache
	I0602 10:37:55.415580    7781 docker.go:258] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0602 10:37:55.415610    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load"
	I0602 10:37:57.336558    7781 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load": (1.92092386s)
	I0602 10:37:57.336572    7781 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 from cache
	I0602 10:37:57.336589    7781 docker.go:258] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0602 10:37:57.336601    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load"
	I0602 10:37:58.261266    7781 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 from cache
	I0602 10:37:58.261295    7781 docker.go:258] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
	I0602 10:37:58.261308    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load"
	I0602 10:37:59.336566    7781 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load": (1.075235795s)
	I0602 10:37:59.336581    7781 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 from cache
	I0602 10:37:59.336598    7781 docker.go:258] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0602 10:37:59.336609    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load"
	I0602 10:38:00.323619    7781 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 from cache
	I0602 10:38:00.323651    7781 docker.go:258] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I0602 10:38:00.323667    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load"
	I0602 10:38:03.268696    7781 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load": (2.945002161s)
	I0602 10:38:03.268712    7781 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 from cache
	I0602 10:38:03.268733    7781 cache_images.go:123] Successfully loaded all cached images
	I0602 10:38:03.268738    7781 cache_images.go:92] LoadImages completed in 11.541242372s
	I0602 10:38:03.268808    7781 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 10:38:03.342165    7781 cni.go:95] Creating CNI manager for ""
	I0602 10:38:03.342177    7781 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:38:03.342186    7781 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 10:38:03.342196    7781 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20220602103745-2113 NodeName:test-preload-20220602103745-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 10:38:03.342288    7781 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "test-preload-20220602103745-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 10:38:03.342351    7781 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=test-preload-20220602103745-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220602103745-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 10:38:03.342405    7781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
	I0602 10:38:03.349967    7781 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.0': No such file or directory
	
	Initiating transfer...
	I0602 10:38:03.350016    7781 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
	I0602 10:38:03.357319    7781 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/linux/amd64/v1.17.0/kubelet
	I0602 10:38:03.357321    7781 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/linux/amd64/v1.17.0/kubeadm
	I0602 10:38:03.357323    7781 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/linux/amd64/v1.17.0/kubectl
	I0602 10:38:04.352202    7781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
	I0602 10:38:04.357248    7781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
	I0602 10:38:04.357273    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
	I0602 10:38:04.718331    7781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
	I0602 10:38:04.779628    7781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubeadm': No such file or directory
	I0602 10:38:04.779666    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
	I0602 10:38:05.765367    7781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 10:38:05.775880    7781 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
	I0602 10:38:05.779794    7781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubelet': No such file or directory
	I0602 10:38:05.779818    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
	I0602 10:38:07.457634    7781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 10:38:07.465839    7781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0602 10:38:07.477847    7781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 10:38:07.490979    7781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
	I0602 10:38:07.503572    7781 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0602 10:38:07.507048    7781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 10:38:07.518839    7781 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113 for IP: 192.168.49.2
	I0602 10:38:07.518943    7781 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 10:38:07.518990    7781 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 10:38:07.519052    7781 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/client.key
	I0602 10:38:07.519069    7781 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/client.crt with IP's: []
	I0602 10:38:07.609703    7781 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/client.crt ...
	I0602 10:38:07.609713    7781 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/client.crt: {Name:mka430ac107cab56c59242aab76c86e9a182de64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:38:07.609980    7781 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/client.key ...
	I0602 10:38:07.609987    7781 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/client.key: {Name:mk746c7cb0f1c3f5773e4191754c540e5154c0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:38:07.610169    7781 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/apiserver.key.dd3b5fb2
	I0602 10:38:07.610184    7781 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0602 10:38:07.738976    7781 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/apiserver.crt.dd3b5fb2 ...
	I0602 10:38:07.738991    7781 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/apiserver.crt.dd3b5fb2: {Name:mk62347f2c8f102ebbd4f1af6d85fa8bef464c2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:38:07.739264    7781 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/apiserver.key.dd3b5fb2 ...
	I0602 10:38:07.739274    7781 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/apiserver.key.dd3b5fb2: {Name:mkf8f3d01d6cdbe650334338319737cfabdb3f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:38:07.739452    7781 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/apiserver.crt
	I0602 10:38:07.739609    7781 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/apiserver.key
	I0602 10:38:07.739764    7781 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/proxy-client.key
	I0602 10:38:07.739781    7781 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/proxy-client.crt with IP's: []
	I0602 10:38:07.895833    7781 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/proxy-client.crt ...
	I0602 10:38:07.895844    7781 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/proxy-client.crt: {Name:mkd921377d8fc367de37ae7db346cd55233efc5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:38:07.896096    7781 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/proxy-client.key ...
	I0602 10:38:07.896105    7781 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/proxy-client.key: {Name:mk9f352b94d9b1d667ae63c059432407a33e686b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:38:07.896490    7781 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 10:38:07.896526    7781 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 10:38:07.896561    7781 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 10:38:07.896626    7781 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 10:38:07.896665    7781 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 10:38:07.896730    7781 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 10:38:07.896811    7781 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 10:38:07.897264    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 10:38:07.915026    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0602 10:38:07.931451    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 10:38:07.947782    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/test-preload-20220602103745-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0602 10:38:07.964196    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 10:38:07.981445    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 10:38:07.998163    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 10:38:08.014881    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 10:38:08.032437    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 10:38:08.050290    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 10:38:08.066841    7781 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 10:38:08.083747    7781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 10:38:08.096063    7781 ssh_runner.go:195] Run: openssl version
	I0602 10:38:08.101326    7781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 10:38:08.109240    7781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 10:38:08.113105    7781 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 10:38:08.113141    7781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 10:38:08.118209    7781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 10:38:08.125859    7781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 10:38:08.133573    7781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:38:08.137360    7781 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:38:08.137398    7781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:38:08.142396    7781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 10:38:08.149985    7781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 10:38:08.157811    7781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 10:38:08.161724    7781 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 10:38:08.161766    7781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 10:38:08.167076    7781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
	I0602 10:38:08.174500    7781 kubeadm.go:395] StartCluster: {Name:test-preload-20220602103745-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220602103745-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:38:08.174592    7781 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 10:38:08.203041    7781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 10:38:08.210401    7781 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 10:38:08.217665    7781 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 10:38:08.217709    7781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 10:38:08.225639    7781 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 10:38:08.225662    7781 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 10:38:08.942325    7781 out.go:204]   - Generating certificates and keys ...
	I0602 10:38:10.868737    7781 out.go:204]   - Booting up control plane ...
	W0602 10:40:05.807292    7781 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [test-preload-20220602103745-2113 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [test-preload-20220602103745-2113 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0602 17:38:08.275775    1458 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0602 17:38:08.275826    1458 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0602 17:38:10.882308    1458 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0602 17:38:10.883064    1458 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [test-preload-20220602103745-2113 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [test-preload-20220602103745-2113 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0602 17:38:08.275775    1458 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0602 17:38:08.275826    1458 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0602 17:38:10.882308    1458 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0602 17:38:10.883064    1458 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0602 10:40:05.807328    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 10:40:06.231772    7781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 10:40:06.241271    7781 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 10:40:06.241328    7781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 10:40:06.249829    7781 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 10:40:06.249849    7781 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 10:40:06.947856    7781 out.go:204]   - Generating certificates and keys ...
	I0602 10:40:07.892378    7781 out.go:204]   - Booting up control plane ...
	I0602 10:42:02.807627    7781 kubeadm.go:397] StartCluster complete in 3m54.63220167s
	I0602 10:42:02.807704    7781 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 10:42:02.835770    7781 logs.go:274] 0 containers: []
	W0602 10:42:02.835784    7781 logs.go:276] No container was found matching "kube-apiserver"
	I0602 10:42:02.835842    7781 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 10:42:02.863266    7781 logs.go:274] 0 containers: []
	W0602 10:42:02.863278    7781 logs.go:276] No container was found matching "etcd"
	I0602 10:42:02.863335    7781 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 10:42:02.892550    7781 logs.go:274] 0 containers: []
	W0602 10:42:02.892565    7781 logs.go:276] No container was found matching "coredns"
	I0602 10:42:02.892630    7781 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 10:42:02.921434    7781 logs.go:274] 0 containers: []
	W0602 10:42:02.921446    7781 logs.go:276] No container was found matching "kube-scheduler"
	I0602 10:42:02.921505    7781 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 10:42:02.950875    7781 logs.go:274] 0 containers: []
	W0602 10:42:02.950888    7781 logs.go:276] No container was found matching "kube-proxy"
	I0602 10:42:02.950944    7781 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 10:42:02.978731    7781 logs.go:274] 0 containers: []
	W0602 10:42:02.978743    7781 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 10:42:02.978800    7781 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 10:42:03.006943    7781 logs.go:274] 0 containers: []
	W0602 10:42:03.006955    7781 logs.go:276] No container was found matching "storage-provisioner"
	I0602 10:42:03.007015    7781 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 10:42:03.034750    7781 logs.go:274] 0 containers: []
	W0602 10:42:03.034763    7781 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 10:42:03.034770    7781 logs.go:123] Gathering logs for kubelet ...
	I0602 10:42:03.034777    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 10:42:03.073876    7781 logs.go:123] Gathering logs for dmesg ...
	I0602 10:42:03.073889    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 10:42:03.086568    7781 logs.go:123] Gathering logs for describe nodes ...
	I0602 10:42:03.086580    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 10:42:03.138610    7781 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 10:42:03.138621    7781 logs.go:123] Gathering logs for Docker ...
	I0602 10:42:03.138628    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 10:42:03.152113    7781 logs.go:123] Gathering logs for container status ...
	I0602 10:42:03.152126    7781 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 10:42:05.206365    7781 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054219777s)
	W0602 10:42:05.206484    7781 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0602 17:40:06.298686    3717 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0602 17:40:06.298739    3717 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0602 17:40:07.881861    3717 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0602 17:40:07.882918    3717 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0602 10:42:05.206500    7781 out.go:239] * 
	* 
	W0602 10:42:05.206617    7781 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0602 17:40:06.298686    3717 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0602 17:40:06.298739    3717 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0602 17:40:07.881861    3717 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0602 17:40:07.882918    3717 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0602 17:40:06.298686    3717 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0602 17:40:06.298739    3717 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0602 17:40:07.881861    3717 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0602 17:40:07.882918    3717 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 10:42:05.206631    7781 out.go:239] * 
	* 
	W0602 10:42:05.207167    7781 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 10:42:05.270129    7781 out.go:177] 
	W0602 10:42:05.312227    7781 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0602 17:40:06.298686    3717 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0602 17:40:06.298739    3717 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0602 17:40:07.881861    3717 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0602 17:40:07.882918    3717 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0602 17:40:06.298686    3717 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0602 17:40:06.298739    3717 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0602 17:40:07.881861    3717 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0602 17:40:07.882918    3717 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 10:42:05.333353    7781 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0602 10:42:05.333424    7781 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0602 10:42:05.375890    7781 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:50: out/minikube-darwin-amd64 start -p test-preload-20220602103745-2113 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0 failed: exit status 109
panic.go:482: *** TestPreload FAILED at 2022-06-02 10:42:05.480214 -0700 PDT m=+1820.591962656
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220602103745-2113
helpers_test.go:235: (dbg) docker inspect test-preload-20220602103745-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a18ad4e46c43c60ccb4e0441c291023f56c08dee00950ea176d120f7d1dabe5f",
	        "Created": "2022-06-02T17:37:47.221135556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 91319,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:37:47.532326762Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/a18ad4e46c43c60ccb4e0441c291023f56c08dee00950ea176d120f7d1dabe5f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a18ad4e46c43c60ccb4e0441c291023f56c08dee00950ea176d120f7d1dabe5f/hostname",
	        "HostsPath": "/var/lib/docker/containers/a18ad4e46c43c60ccb4e0441c291023f56c08dee00950ea176d120f7d1dabe5f/hosts",
	        "LogPath": "/var/lib/docker/containers/a18ad4e46c43c60ccb4e0441c291023f56c08dee00950ea176d120f7d1dabe5f/a18ad4e46c43c60ccb4e0441c291023f56c08dee00950ea176d120f7d1dabe5f-json.log",
	        "Name": "/test-preload-20220602103745-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-20220602103745-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20220602103745-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ce6a0ac964e5c78c945329f870abd2c2c89b26e40cf09ce2b7cb517b27785e9a-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce6a0ac964e5c78c945329f870abd2c2c89b26e40cf09ce2b7cb517b27785e9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce6a0ac964e5c78c945329f870abd2c2c89b26e40cf09ce2b7cb517b27785e9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce6a0ac964e5c78c945329f870abd2c2c89b26e40cf09ce2b7cb517b27785e9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-20220602103745-2113",
	                "Source": "/var/lib/docker/volumes/test-preload-20220602103745-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20220602103745-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20220602103745-2113",
	                "name.minikube.sigs.k8s.io": "test-preload-20220602103745-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c7de05d607db2f96160948b1ce3dbc5bab7219459e936da1eb6a2f8c01bd27e8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58814"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58815"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58816"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58817"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58818"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c7de05d607db",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20220602103745-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a18ad4e46c43",
	                        "test-preload-20220602103745-2113"
	                    ],
	                    "NetworkID": "f244bd23b062df4598df75b1eb65d204256d2aaaea7b5d7f2bee0f7474df0523",
	                    "EndpointID": "02755a32efd7529d191dbfcbb004662aa41bcece1284e043cda61da2b06cc675",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
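The full inspect dump above can be narrowed to just the fields the post-mortem needs with a Go format template; a hypothetical invocation (not part of the test helpers) would be:

	docker inspect --format '{{.State.Status}}' test-preload-20220602103745-2113

which, per the State block above, would print "running".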
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220602103745-2113 -n test-preload-20220602103745-2113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220602103745-2113 -n test-preload-20220602103745-2113: exit status 6 (424.727667ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 10:42:05.964258    8001 status.go:413] kubeconfig endpoint: extract IP: "test-preload-20220602103745-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig

                                                
                                                
** /stderr **
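The status output warns that kubectl points at a stale context and names the fix; a sketch of applying it to this profile would be:

	out/minikube-darwin-amd64 update-context -p test-preload-20220602103745-2113

This only repairs the kubeconfig entry for the profile; the start failure above still has to be addressed separately.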
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-20220602103745-2113" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-20220602103745-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220602103745-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220602103745-2113: (2.544816204s)
--- FAIL: TestPreload (263.35s)

                                                
                                    
TestRunningBinaryUpgrade (50.66s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3083866460.exe start -p running-upgrade-20220602104647-2113 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3083866460.exe start -p running-upgrade-20220602104647-2113 --memory=2200 --vm-driver=docker : exit status 70 (34.666510236s)

                                                
                                                
-- stdout --
	! [running-upgrade-20220602104647-2113] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig152345521
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:47:04.700229518 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-20220602104647-2113" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:47:20.941228494 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-20220602104647-2113", then "minikube start -p running-upgrade-20220602104647-2113 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.25.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.25.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:47:20.941228494 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
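The comments in the generated docker.service shown in the diffs above describe the standard systemd pattern for overriding an inherited ExecStart: the directive is first cleared with an empty assignment and then redefined, otherwise systemd rejects the unit with the "more than one ExecStart=" error the comment quotes. A minimal illustrative drop-in using that pattern (not the file minikube writes) would be:

	# /etc/systemd/system/docker.service.d/override.conf
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

followed by the daemon-reload and docker restart seen in the provisioning command above.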
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3083866460.exe start -p running-upgrade-20220602104647-2113 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3083866460.exe start -p running-upgrade-20220602104647-2113 --memory=2200 --vm-driver=docker : exit status 70 (4.582843794s)

                                                
                                                
-- stdout --
	* [running-upgrade-20220602104647-2113] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig3442922302
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220602104647-2113" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3083866460.exe start -p running-upgrade-20220602104647-2113 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3083866460.exe start -p running-upgrade-20220602104647-2113 --memory=2200 --vm-driver=docker : exit status 70 (4.573492169s)

                                                
                                                
-- stdout --
	* [running-upgrade-20220602104647-2113] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig3429286845
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220602104647-2113" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
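Each retry fails at the same point: "sudo systemctl start docker" inside the kic container exits non-zero, and the error defers to "systemctl status docker.service" and "journalctl -xe" for details. Since the service runs inside the container, a sketch of collecting those diagnostics from the host (assuming the container is still up) would be:

	docker exec running-upgrade-20220602104647-2113 systemctl status docker.service --no-pager
	docker exec running-upgrade-20220602104647-2113 journalctl -xe --no-pager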
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:482: *** TestRunningBinaryUpgrade FAILED at 2022-06-02 10:47:35.448295 -0700 PDT m=+2150.558614236
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220602104647-2113
helpers_test.go:235: (dbg) docker inspect running-upgrade-20220602104647-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "04b82f93c87d9e2fbfe9c89d62edde8a1bc86d669ef468c061fbd9e55833995e",
	        "Created": "2022-06-02T17:47:12.910788165Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 125137,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:47:13.172242249Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/04b82f93c87d9e2fbfe9c89d62edde8a1bc86d669ef468c061fbd9e55833995e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/04b82f93c87d9e2fbfe9c89d62edde8a1bc86d669ef468c061fbd9e55833995e/hostname",
	        "HostsPath": "/var/lib/docker/containers/04b82f93c87d9e2fbfe9c89d62edde8a1bc86d669ef468c061fbd9e55833995e/hosts",
	        "LogPath": "/var/lib/docker/containers/04b82f93c87d9e2fbfe9c89d62edde8a1bc86d669ef468c061fbd9e55833995e/04b82f93c87d9e2fbfe9c89d62edde8a1bc86d669ef468c061fbd9e55833995e-json.log",
	        "Name": "/running-upgrade-20220602104647-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20220602104647-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a6867e09e1ac90259f317e51ca494f1ef69d61895584f8e15269a94eb173771e-init/diff:/var/lib/docker/overlay2/68730985f7cfd3b645dffaaf625a84e0f45a2e522a7bbd35c74f3e961455c955/diff:/var/lib/docker/overlay2/086a9a5d11913cdd684dceb8ac095d883dd96aeffd0e2f279790b7c3992d505d/diff:/var/lib/docker/overlay2/4a7767ee605e9d3846f50062d68dbb144b6c872e261ea175128352b6a2008186/diff:/var/lib/docker/overlay2/90cf826a4010a4a3587a817d18da915c42b4f8d827d97ec08235753517cf7cba/diff:/var/lib/docker/overlay2/eaa2a7e56e26bbbbe52325d4dd17430b5f88783e1d7106afef9cb75f9f64673a/diff:/var/lib/docker/overlay2/e79fa306793a060f9fc9b0e6d7b5ef03378cf4fbe65d7c89e8f0ccfcf0562282/diff:/var/lib/docker/overlay2/bba27b2a99740d20b41b7850c0375cecc063e583b9afd93a82a7cf23a44cb8f1/diff:/var/lib/docker/overlay2/6cf665e8f6ea0dc4d08cacc5e06e998a6fd9208a2e8197f3d9a7fc6f28369cbc/diff:/var/lib/docker/overlay2/c7213236b6f74adfad523b3a0745db25c9c3b5aaa7be452e8c7562ac9af55529/diff:/var/lib/docker/overlay2/e6b28f
3ff5c1a7df3787620c5367e76e5d082a2719852854a0059452497aac2d/diff:/var/lib/docker/overlay2/c68b5a0b50ed2410ef2428f9ca77e4af1a8ff0f3c90c1ba30ef5f42e7c2f0fe3/diff:/var/lib/docker/overlay2/3062e3729948d2242933a53d46e139d56542622bc84399d578827874566ec45d/diff:/var/lib/docker/overlay2/5ea2fa0caf63c907fa5f7230a4d86016224b7a8090e21ccd0fafbaacc9b02989/diff:/var/lib/docker/overlay2/d321375c7b5f3519273186dddf87e312e97664c8899baad733ed047158e48167/diff:/var/lib/docker/overlay2/51b4d7bff48b339142e73ea6bf81882193895d7beee21763c05808dc42417831/diff:/var/lib/docker/overlay2/6cc3fdbbe55a5101cad2d2f3a19f351f440ca4ce572bd9590d534a0d4e756871/diff:/var/lib/docker/overlay2/c7b81ca26ce547908b8589973f707ab55de536d55f4e91ff33c4ad44c6335157/diff:/var/lib/docker/overlay2/54518fc6c0f4bd67872c1a8f18d57e28e9977220eb6b786882bdee74547cfd52/diff:/var/lib/docker/overlay2/a70efa960030191dd9226c96dd524ab1af6b4c40f8037297a048af6ce65e7b91/diff:/var/lib/docker/overlay2/4287ba7d9b601768fcd455102b8577d6e47986dacfe67932cb862726d4269593/diff:/var/lib/d
ocker/overlay2/8cc5c99c5858de4fd5685625834a50fc3618c82d09969525ed7b0605000309eb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a6867e09e1ac90259f317e51ca494f1ef69d61895584f8e15269a94eb173771e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a6867e09e1ac90259f317e51ca494f1ef69d61895584f8e15269a94eb173771e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a6867e09e1ac90259f317e51ca494f1ef69d61895584f8e15269a94eb173771e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20220602104647-2113",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20220602104647-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20220602104647-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20220602104647-2113",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20220602104647-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14771b28999f8216b48c61f4ae2af793192f0ad69fdee3fcbd2e2a27514a1031",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61700"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61701"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61702"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/14771b28999f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "24c52842831d508bb82ed345d5792d77fa8cbb01d7d43acb9575edf591f61168",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "2d5114eaf3d33c2727c4ba12e4dc285212892054552f33143ece82afb1966168",
	                    "EndpointID": "24c52842831d508bb82ed345d5792d77fa8cbb01d7d43acb9575edf591f61168",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220602104647-2113 -n running-upgrade-20220602104647-2113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220602104647-2113 -n running-upgrade-20220602104647-2113: exit status 6 (433.474818ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 10:47:35.940100    9854 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20220602104647-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-20220602104647-2113" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-20220602104647-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220602104647-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220602104647-2113: (2.566906456s)
--- FAIL: TestRunningBinaryUpgrade (50.66s)

                                                
                                    
TestKubernetesUpgrade (306.35s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220602104828-2113 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0602 10:49:12.602607    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:49:29.028716    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 10:49:29.035204    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 10:49:29.046064    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 10:49:29.066532    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 10:49:29.107597    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 10:49:29.188556    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 10:49:29.350050    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 10:49:29.670219    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 10:49:30.311433    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 10:49:31.592883    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220602104828-2113 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m13.623873117s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220602104828-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kubernetes-upgrade-20220602104828-2113 in cluster kubernetes-upgrade-20220602104828-2113
	* Pulling base image ...
	* Downloading Kubernetes v1.16.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 10:48:28.219913   10189 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:48:28.220065   10189 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:48:28.220070   10189 out.go:309] Setting ErrFile to fd 2...
	I0602 10:48:28.220074   10189 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:48:28.220164   10189 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 10:48:28.220484   10189 out.go:303] Setting JSON to false
	I0602 10:48:28.236548   10189 start.go:115] hostinfo: {"hostname":"37309.local","uptime":2878,"bootTime":1654189230,"procs":359,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 10:48:28.236621   10189 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 10:48:28.258784   10189 out.go:177] * [kubernetes-upgrade-20220602104828-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 10:48:28.301593   10189 notify.go:193] Checking for updates...
	I0602 10:48:28.323165   10189 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 10:48:28.344506   10189 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 10:48:28.366468   10189 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 10:48:28.388428   10189 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 10:48:28.410432   10189 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 10:48:28.433094   10189 config.go:178] Loaded profile config "cert-expiration-20220602104608-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 10:48:28.433197   10189 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 10:48:28.505313   10189 docker.go:137] docker version: linux-20.10.14
	I0602 10:48:28.505459   10189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:48:28.631388   10189 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:53 SystemTime:2022-06-02 17:48:28.563728871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:48:28.675141   10189 out.go:177] * Using the docker driver based on user configuration
	I0602 10:48:28.696145   10189 start.go:284] selected driver: docker
	I0602 10:48:28.696184   10189 start.go:806] validating driver "docker" against <nil>
	I0602 10:48:28.696213   10189 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 10:48:28.699490   10189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:48:28.825088   10189 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:53 SystemTime:2022-06-02 17:48:28.758123198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:48:28.825193   10189 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 10:48:28.825383   10189 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0602 10:48:28.847058   10189 out.go:177] * Using Docker Desktop driver with the root privilege
	I0602 10:48:28.868979   10189 cni.go:95] Creating CNI manager for ""
	I0602 10:48:28.869009   10189 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:48:28.869031   10189 start_flags.go:306] config:
	{Name:kubernetes-upgrade-20220602104828-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220602104828-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:48:28.891023   10189 out.go:177] * Starting control plane node kubernetes-upgrade-20220602104828-2113 in cluster kubernetes-upgrade-20220602104828-2113
	I0602 10:48:28.948874   10189 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 10:48:28.970066   10189 out.go:177] * Pulling base image ...
	I0602 10:48:29.045136   10189 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 10:48:29.045189   10189 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 10:48:29.111720   10189 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 10:48:29.111743   10189 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 10:48:29.119913   10189 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0602 10:48:29.119933   10189 cache.go:57] Caching tarball of preloaded images
	I0602 10:48:29.120233   10189 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 10:48:29.163914   10189 out.go:177] * Downloading Kubernetes v1.16.0 preload ...
	I0602 10:48:29.184886   10189 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0602 10:48:29.280445   10189 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0602 10:48:33.337499   10189 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0602 10:48:33.337646   10189 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0602 10:48:33.912654   10189 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
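The preload step above resolves a remote tarball URL that carries its expected md5 as a ?checksum= query parameter, downloads it, and then re-verifies the file on disk before trusting the cache (the "saving checksum" / "verifying checksum" lines). A minimal Go sketch of that kind of post-download check, using a hypothetical local path and the md5 visible in the URL above (an illustration, not minikube's actual preload.go):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the downloaded tarball and compares it to the checksum
// that travelled in the download URL's ?checksum=md5:... parameter.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Hypothetical cache path; the md5 is the one shown in the URL above.
	err := verifyMD5("preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
		"326f3ce331abb64565b50b8c9e791244")
	fmt.Println(err)
}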
	I0602 10:48:33.912738   10189 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/config.json ...
	I0602 10:48:33.912759   10189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/config.json: {Name:mk006bf63118345d79017e917ea1aa2e84a55f60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:48:33.913031   10189 cache.go:206] Successfully downloaded all kic artifacts
	I0602 10:48:33.913063   10189 start.go:352] acquiring machines lock for kubernetes-upgrade-20220602104828-2113: {Name:mk4e0f9303d051c78153416e2b9a37ee4f9993ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:48:33.913147   10189 start.go:356] acquired machines lock for "kubernetes-upgrade-20220602104828-2113" in 77.118µs
	I0602 10:48:33.913170   10189 start.go:91] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220602104828-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220602104828-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 10:48:33.913211   10189 start.go:131] createHost starting for "" (driver="docker")
	I0602 10:48:33.962899   10189 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0602 10:48:33.963231   10189 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220602104828-2113" (driver="docker")
	I0602 10:48:33.963273   10189 client.go:168] LocalClient.Create starting
	I0602 10:48:33.963409   10189 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem
	I0602 10:48:33.963474   10189 main.go:134] libmachine: Decoding PEM data...
	I0602 10:48:33.963496   10189 main.go:134] libmachine: Parsing certificate...
	I0602 10:48:33.963590   10189 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem
	I0602 10:48:33.963639   10189 main.go:134] libmachine: Decoding PEM data...
	I0602 10:48:33.963656   10189 main.go:134] libmachine: Parsing certificate...
	I0602 10:48:33.964283   10189 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220602104828-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 10:48:34.028259   10189 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220602104828-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 10:48:34.028359   10189 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220602104828-2113] to gather additional debugging logs...
	I0602 10:48:34.028385   10189 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220602104828-2113
	W0602 10:48:34.090415   10189 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220602104828-2113 returned with exit code 1
	I0602 10:48:34.090453   10189 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220602104828-2113]: docker network inspect kubernetes-upgrade-20220602104828-2113: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220602104828-2113
	I0602 10:48:34.090491   10189 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220602104828-2113]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220602104828-2113
	
	** /stderr **
	I0602 10:48:34.090579   10189 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 10:48:34.153371   10189 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00063c090] misses:0}
	I0602 10:48:34.153408   10189 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 10:48:34.153425   10189 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220602104828-2113 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 10:48:34.153505   10189 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220602104828-2113
	W0602 10:48:34.215662   10189 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220602104828-2113 returned with exit code 1
	W0602 10:48:34.215719   10189 network_create.go:107] failed to create docker network kubernetes-upgrade-20220602104828-2113 192.168.49.0/24, will retry: subnet is taken
	I0602 10:48:34.215993   10189 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00063c090] amended:false}} dirty:map[] misses:0}
	I0602 10:48:34.216011   10189 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 10:48:34.216229   10189 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00063c090] amended:true}} dirty:map[192.168.49.0:0xc00063c090 192.168.58.0:0xc0005f6240] misses:0}
	I0602 10:48:34.216245   10189 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 10:48:34.216252   10189 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220602104828-2113 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0602 10:48:34.216314   10189 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220602104828-2113
	I0602 10:48:34.309454   10189 network_create.go:99] docker network kubernetes-upgrade-20220602104828-2113 192.168.58.0/24 created
	I0602 10:48:34.309496   10189 kic.go:106] calculated static IP "192.168.58.2" for the "kubernetes-upgrade-20220602104828-2113" container
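The network creation above shows the subnet retry behaviour: 192.168.49.0/24 is already reserved, the first docker network create exits 1 with "subnet is taken", so the next candidate 192.168.58.0/24 is reserved and the create is retried until it succeeds. A rough Go sketch of that loop, with an assumed candidate list and labels (an illustration, not minikube's network_create.go):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "kubernetes-upgrade-20220602104828-2113"
	// Candidate private /24 subnets, tried in order until one is free.
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
	for _, subnet := range candidates {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet,
			"--label=created_by.minikube.sigs.k8s.io=true", name).CombinedOutput()
		if err != nil {
			// Typically a pool-overlap error from docker; try the next candidate.
			fmt.Printf("subnet %s unavailable: %v\n%s", subnet, err, out)
			continue
		}
		fmt.Printf("created network %s on %s\n", name, subnet)
		return
	}
	fmt.Println("no free subnet found")
}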
	I0602 10:48:34.309595   10189 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 10:48:34.376873   10189 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220602104828-2113 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220602104828-2113 --label created_by.minikube.sigs.k8s.io=true
	I0602 10:48:34.439716   10189 oci.go:103] Successfully created a docker volume kubernetes-upgrade-20220602104828-2113
	I0602 10:48:34.439850   10189 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-20220602104828-2113-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220602104828-2113 --entrypoint /usr/bin/test -v kubernetes-upgrade-20220602104828-2113:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 10:48:34.902585   10189 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-20220602104828-2113
	I0602 10:48:34.902625   10189 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 10:48:34.902638   10189 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 10:48:34.902753   10189 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220602104828-2113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 10:48:38.734944   10189 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220602104828-2113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (3.832062587s)
	I0602 10:48:38.734973   10189 kic.go:188] duration metric: took 3.832322 seconds to extract preloaded images to volume
	I0602 10:48:38.735213   10189 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 10:48:38.869964   10189 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220602104828-2113 --name kubernetes-upgrade-20220602104828-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220602104828-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220602104828-2113 --network kubernetes-upgrade-20220602104828-2113 --ip 192.168.58.2 --volume kubernetes-upgrade-20220602104828-2113:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	I0602 10:48:39.264388   10189 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220602104828-2113 --format={{.State.Running}}
	I0602 10:48:39.340618   10189 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220602104828-2113 --format={{.State.Status}}
	I0602 10:48:39.419220   10189 cli_runner.go:164] Run: docker exec kubernetes-upgrade-20220602104828-2113 stat /var/lib/dpkg/alternatives/iptables
	I0602 10:48:39.542691   10189 oci.go:247] the created container "kubernetes-upgrade-20220602104828-2113" has a running status.
	I0602 10:48:39.542741   10189 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kubernetes-upgrade-20220602104828-2113/id_rsa...
	I0602 10:48:39.673664   10189 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kubernetes-upgrade-20220602104828-2113/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 10:48:39.785037   10189 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220602104828-2113 --format={{.State.Status}}
	I0602 10:48:39.855350   10189 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 10:48:39.855368   10189 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220602104828-2113 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0602 10:48:39.987517   10189 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220602104828-2113 --format={{.State.Status}}
	I0602 10:48:40.057649   10189 machine.go:88] provisioning docker machine ...
	I0602 10:48:40.057820   10189 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220602104828-2113"
	I0602 10:48:40.057965   10189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:48:40.127524   10189 main.go:134] libmachine: Using SSH client type: native
	I0602 10:48:40.127716   10189 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 62571 <nil> <nil>}
	I0602 10:48:40.127732   10189 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220602104828-2113 && echo "kubernetes-upgrade-20220602104828-2113" | sudo tee /etc/hostname
	I0602 10:48:40.252125   10189 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220602104828-2113
	
	I0602 10:48:40.252204   10189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:48:40.322326   10189 main.go:134] libmachine: Using SSH client type: native
	I0602 10:48:40.322558   10189 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 62571 <nil> <nil>}
	I0602 10:48:40.322577   10189 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220602104828-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220602104828-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220602104828-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 10:48:40.438856   10189 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 10:48:40.438874   10189 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 10:48:40.438898   10189 ubuntu.go:177] setting up certificates
	I0602 10:48:40.438905   10189 provision.go:83] configureAuth start
	I0602 10:48:40.438964   10189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220602104828-2113
	I0602 10:48:40.508459   10189 provision.go:138] copyHostCerts
	I0602 10:48:40.508536   10189 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 10:48:40.508545   10189 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 10:48:40.508640   10189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 10:48:40.508820   10189 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 10:48:40.508833   10189 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 10:48:40.508891   10189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 10:48:40.509024   10189 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 10:48:40.509030   10189 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 10:48:40.509083   10189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 10:48:40.509188   10189 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220602104828-2113 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220602104828-2113]
	I0602 10:48:40.725881   10189 provision.go:172] copyRemoteCerts
	I0602 10:48:40.725961   10189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 10:48:40.726017   10189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:48:40.799499   10189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62571 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kubernetes-upgrade-20220602104828-2113/id_rsa Username:docker}
	I0602 10:48:40.885271   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 10:48:40.902439   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0602 10:48:40.919392   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 10:48:40.937210   10189 provision.go:86] duration metric: configureAuth took 498.291127ms
	I0602 10:48:40.937224   10189 ubuntu.go:193] setting minikube options for container-runtime
	I0602 10:48:40.937350   10189 config.go:178] Loaded profile config "kubernetes-upgrade-20220602104828-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0602 10:48:40.937401   10189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:48:41.009003   10189 main.go:134] libmachine: Using SSH client type: native
	I0602 10:48:41.009240   10189 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 62571 <nil> <nil>}
	I0602 10:48:41.009258   10189 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 10:48:41.129115   10189 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 10:48:41.129126   10189 ubuntu.go:71] root file system type: overlay
	I0602 10:48:41.129273   10189 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 10:48:41.129357   10189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:48:41.203427   10189 main.go:134] libmachine: Using SSH client type: native
	I0602 10:48:41.203587   10189 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 62571 <nil> <nil>}
	I0602 10:48:41.203640   10189 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 10:48:41.327968   10189 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 10:48:41.328061   10189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:48:41.398954   10189 main.go:134] libmachine: Using SSH client type: native
	I0602 10:48:41.399129   10189 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 62571 <nil> <nil>}
	I0602 10:48:41.399143   10189 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 10:48:41.995824   10189 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:48:41.335508096 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0602 10:48:41.995854   10189 machine.go:91] provisioned docker machine in 1.938054837s
	I0602 10:48:41.995861   10189 client.go:171] LocalClient.Create took 8.032555567s
	I0602 10:48:41.995879   10189 start.go:173] duration metric: libmachine.API.Create for "kubernetes-upgrade-20220602104828-2113" took 8.032623575s
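The docker.service update at 10:48:41.399 is deliberately conditional: the freshly rendered unit is diffed against the one on disk, and only when they differ is the new file moved into place and docker reloaded, enabled, and restarted (hence the diff output above). A small Go sketch of the same only-restart-when-changed idea, with assumed paths and assuming it runs as root (an illustration, not minikube's provisioner):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateDockerUnit installs the rendered unit file only when it differs from
// the current one, so an unchanged configuration never restarts docker.
func updateDockerUnit(current, rendered string) error {
	oldData, _ := os.ReadFile(current) // current unit may not exist yet; empty is fine
	newData, err := os.ReadFile(rendered)
	if err != nil {
		return err
	}
	if bytes.Equal(oldData, newData) {
		return nil // nothing changed, keep docker running as-is
	}
	if err := os.Rename(rendered, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(updateDockerUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"))
}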
	I0602 10:48:41.995889   10189 start.go:306] post-start starting for "kubernetes-upgrade-20220602104828-2113" (driver="docker")
	I0602 10:48:41.995895   10189 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 10:48:41.995956   10189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 10:48:41.996025   10189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:48:42.070927   10189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62571 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kubernetes-upgrade-20220602104828-2113/id_rsa Username:docker}
	I0602 10:48:42.157400   10189 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 10:48:42.161029   10189 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 10:48:42.161046   10189 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 10:48:42.161054   10189 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 10:48:42.161059   10189 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 10:48:42.161068   10189 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 10:48:42.161181   10189 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 10:48:42.161318   10189 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 10:48:42.161478   10189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 10:48:42.168505   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 10:48:42.186957   10189 start.go:309] post-start completed in 191.057487ms
	I0602 10:48:42.187462   10189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220602104828-2113
	I0602 10:48:42.258037   10189 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/config.json ...
	I0602 10:48:42.258429   10189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 10:48:42.258477   10189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:48:42.330373   10189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62571 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kubernetes-upgrade-20220602104828-2113/id_rsa Username:docker}
	I0602 10:48:42.415058   10189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 10:48:42.419488   10189 start.go:134] duration metric: createHost completed in 8.506240497s
	I0602 10:48:42.419509   10189 start.go:81] releasing machines lock for "kubernetes-upgrade-20220602104828-2113", held for 8.506325839s
	I0602 10:48:42.419605   10189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220602104828-2113
	I0602 10:48:42.489842   10189 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 10:48:42.489842   10189 ssh_runner.go:195] Run: systemctl --version
	I0602 10:48:42.489918   10189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:48:42.489916   10189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:48:42.564630   10189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62571 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kubernetes-upgrade-20220602104828-2113/id_rsa Username:docker}
	I0602 10:48:42.567421   10189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62571 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kubernetes-upgrade-20220602104828-2113/id_rsa Username:docker}
	I0602 10:48:42.778728   10189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 10:48:42.788442   10189 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 10:48:42.797996   10189 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 10:48:42.798054   10189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 10:48:42.807690   10189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 10:48:42.820281   10189 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 10:48:42.890008   10189 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 10:48:42.955491   10189 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 10:48:42.965578   10189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 10:48:43.029888   10189 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 10:48:43.039639   10189 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 10:48:43.075013   10189 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 10:48:43.150172   10189 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0602 10:48:43.150387   10189 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220602104828-2113 dig +short host.docker.internal
	I0602 10:48:43.310356   10189 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 10:48:43.310555   10189 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 10:48:43.314778   10189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
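The /etc/hosts edit above follows a small idempotent pattern: strip any existing host.minikube.internal line, append the freshly resolved IP, write the result to a temp file, then copy it back over /etc/hosts. A Go sketch of the same upsert with assumed paths (the log's actual mechanism is the bash one-liner shown above):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any stale line ending in the given hostname and appends
// the current ip<TAB>host mapping, writing via a temp file then renaming.
func upsertHost(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // drop blanks and the stale entry; sketch only
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	tmp := hostsPath + ".new"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	fmt.Println(upsertHost("/etc/hosts", "192.168.65.2", "host.minikube.internal"))
}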
	I0602 10:48:43.324600   10189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:48:43.395435   10189 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 10:48:43.395501   10189 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 10:48:43.424855   10189 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0602 10:48:43.424869   10189 docker.go:541] Images already preloaded, skipping extraction
	I0602 10:48:43.424936   10189 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 10:48:43.454227   10189 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0602 10:48:43.454247   10189 cache_images.go:84] Images are preloaded, skipping loading
	I0602 10:48:43.454330   10189 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 10:48:43.525765   10189 cni.go:95] Creating CNI manager for ""
	I0602 10:48:43.525777   10189 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:48:43.525788   10189 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 10:48:43.525802   10189 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220602104828-2113 NodeName:kubernetes-upgrade-20220602104828-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 10:48:43.525901   10189 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20220602104828-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-20220602104828-2113
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 10:48:43.525973   10189 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20220602104828-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220602104828-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 10:48:43.526030   10189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0602 10:48:43.533419   10189 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 10:48:43.533476   10189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 10:48:43.540207   10189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I0602 10:48:43.552597   10189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 10:48:43.564812   10189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0602 10:48:43.577254   10189 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 10:48:43.580737   10189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 10:48:43.591468   10189 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113 for IP: 192.168.58.2
	I0602 10:48:43.591577   10189 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 10:48:43.591629   10189 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 10:48:43.591698   10189 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/client.key
	I0602 10:48:43.591712   10189 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/client.crt with IP's: []
	I0602 10:48:43.651194   10189 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/client.crt ...
	I0602 10:48:43.651205   10189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/client.crt: {Name:mk77ec7e461ac49aadb6a6a071fa83dcef4d41cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:48:43.651518   10189 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/client.key ...
	I0602 10:48:43.651528   10189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/client.key: {Name:mk56148a687ecd7b2d774b0474da18da207961fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:48:43.651730   10189 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.key.cee25041
	I0602 10:48:43.651745   10189 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0602 10:48:43.771340   10189 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.crt.cee25041 ...
	I0602 10:48:43.771352   10189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.crt.cee25041: {Name:mk90e2248946c0473aacc0e4aaa00cb6cadb2103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:48:43.771573   10189 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.key.cee25041 ...
	I0602 10:48:43.771580   10189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.key.cee25041: {Name:mk88b6d3eef07c654c3942d3b9106dd4f0d4ed77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:48:43.771747   10189 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.crt
	I0602 10:48:43.771885   10189 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.key
	I0602 10:48:43.772021   10189 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/proxy-client.key
	I0602 10:48:43.772035   10189 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/proxy-client.crt with IP's: []
	I0602 10:48:43.827985   10189 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/proxy-client.crt ...
	I0602 10:48:43.827995   10189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/proxy-client.crt: {Name:mk527349eba8dc0ea738c65c3ba29f95578bc95d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:48:43.828220   10189 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/proxy-client.key ...
	I0602 10:48:43.828230   10189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/proxy-client.key: {Name:mk8b9c7a12add15c96cde1b8b01ce508aabf86b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:48:43.828607   10189 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 10:48:43.828654   10189 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 10:48:43.828663   10189 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 10:48:43.828696   10189 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 10:48:43.828733   10189 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 10:48:43.828759   10189 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 10:48:43.828819   10189 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 10:48:43.829311   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 10:48:43.847380   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0602 10:48:43.863913   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 10:48:43.881381   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 10:48:43.898117   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 10:48:43.914814   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 10:48:43.931241   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 10:48:43.947888   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 10:48:43.966429   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 10:48:43.983729   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 10:48:44.000555   10189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 10:48:44.017426   10189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 10:48:44.029828   10189 ssh_runner.go:195] Run: openssl version
	I0602 10:48:44.034935   10189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 10:48:44.043128   10189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 10:48:44.046736   10189 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 10:48:44.046775   10189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 10:48:44.051903   10189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 10:48:44.059618   10189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 10:48:44.067050   10189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:48:44.070991   10189 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:48:44.071039   10189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:48:44.076171   10189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 10:48:44.084148   10189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 10:48:44.091785   10189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 10:48:44.095691   10189 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 10:48:44.095743   10189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 10:48:44.100696   10189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
	I0602 10:48:44.108164   10189 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220602104828-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220602104828-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:48:44.108264   10189 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 10:48:44.137049   10189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 10:48:44.145093   10189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 10:48:44.152495   10189 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 10:48:44.152542   10189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 10:48:44.159734   10189 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 10:48:44.159757   10189 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 10:48:44.901496   10189 out.go:204]   - Generating certificates and keys ...
	I0602 10:48:47.198001   10189 out.go:204]   - Booting up control plane ...
	W0602 10:50:42.112159   10189 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20220602104828-2113 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20220602104828-2113 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0602 10:50:42.112202   10189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 10:50:42.540184   10189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 10:50:42.549996   10189 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 10:50:42.550050   10189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 10:50:42.558932   10189 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 10:50:42.558954   10189 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 10:50:43.438669   10189 out.go:204]   - Generating certificates and keys ...
	I0602 10:50:44.249081   10189 out.go:204]   - Booting up control plane ...
	I0602 10:52:39.152285   10189 kubeadm.go:397] StartCluster complete in 3m55.043390971s
	I0602 10:52:39.152371   10189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 10:52:39.181503   10189 logs.go:274] 0 containers: []
	W0602 10:52:39.181515   10189 logs.go:276] No container was found matching "kube-apiserver"
	I0602 10:52:39.181579   10189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 10:52:39.210722   10189 logs.go:274] 0 containers: []
	W0602 10:52:39.210737   10189 logs.go:276] No container was found matching "etcd"
	I0602 10:52:39.210802   10189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 10:52:39.240765   10189 logs.go:274] 0 containers: []
	W0602 10:52:39.240780   10189 logs.go:276] No container was found matching "coredns"
	I0602 10:52:39.240840   10189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 10:52:39.277114   10189 logs.go:274] 0 containers: []
	W0602 10:52:39.277128   10189 logs.go:276] No container was found matching "kube-scheduler"
	I0602 10:52:39.277190   10189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 10:52:39.314341   10189 logs.go:274] 0 containers: []
	W0602 10:52:39.314353   10189 logs.go:276] No container was found matching "kube-proxy"
	I0602 10:52:39.314418   10189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 10:52:39.344934   10189 logs.go:274] 0 containers: []
	W0602 10:52:39.344946   10189 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 10:52:39.345005   10189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 10:52:39.374986   10189 logs.go:274] 0 containers: []
	W0602 10:52:39.375001   10189 logs.go:276] No container was found matching "storage-provisioner"
	I0602 10:52:39.375061   10189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 10:52:39.404327   10189 logs.go:274] 0 containers: []
	W0602 10:52:39.404340   10189 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 10:52:39.404347   10189 logs.go:123] Gathering logs for kubelet ...
	I0602 10:52:39.404353   10189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 10:52:39.444699   10189 logs.go:123] Gathering logs for dmesg ...
	I0602 10:52:39.444711   10189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 10:52:39.457515   10189 logs.go:123] Gathering logs for describe nodes ...
	I0602 10:52:39.457530   10189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 10:52:39.510115   10189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 10:52:39.510128   10189 logs.go:123] Gathering logs for Docker ...
	I0602 10:52:39.510140   10189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 10:52:39.524233   10189 logs.go:123] Gathering logs for container status ...
	I0602 10:52:39.524251   10189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 10:52:41.579643   10189 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055366762s)
	W0602 10:52:41.579769   10189 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0602 10:52:41.579787   10189 out.go:239] * 
	W0602 10:52:41.579919   10189 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 10:52:41.579935   10189 out.go:239] * 
	W0602 10:52:41.580540   10189 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 10:52:41.648159   10189 out.go:177] 
	W0602 10:52:41.690180   10189 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 10:52:41.690266   10189 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0602 10:52:41.690326   10189 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0602 10:52:41.732160   10189 out.go:177] 

                                                
                                                
** /stderr **
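The kubeadm output captured above shows a kubelet that never answered its health check, and minikube's own suggestion is to pass the systemd cgroup driver explicitly. A minimal retry sketch, assuming the same profile, memory, and driver as the failed run; the --extra-config flag is the one quoted in the suggestion and the journalctl command is the one kubeadm recommends:

	# retry the failed start with the kubelet pinned to the systemd cgroup driver
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220602104828-2113 \
	  --memory=2200 --kubernetes-version=v1.16.0 --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd

	# if it still fails, inspect the kubelet inside the node, as kubeadm advises
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-20220602104828-2113 -- sudo journalctl -xeu kubelet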
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220602104828-2113 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220602104828-2113
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220602104828-2113: (1.646573763s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220602104828-2113 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220602104828-2113 status --format={{.Host}}: exit status 7 (120.477074ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220602104828-2113 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220602104828-2113 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker : (25.940415101s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220602104828-2113 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220602104828-2113 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220602104828-2113 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (674.818167ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220602104828-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220602104828-2113
	    minikube start -p kubernetes-upgrade-20220602104828-2113 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220602104828-21132 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6, by running:
	    
	    minikube start -p kubernetes-upgrade-20220602104828-2113 --kubernetes-version=v1.23.6
	    

                                                
                                                
** /stderr **
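Minikube refuses the in-place downgrade by design, and the suggestion block above lists three ways forward. A minimal sketch of the first option, assuming it is acceptable to discard the existing cluster state, followed by the same version check the test itself runs (the kubectl context name matches the profile name):

	# option 1 from the suggestion: recreate the cluster at the older version
	out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220602104828-2113
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220602104828-2113 --kubernetes-version=v1.16.0 --driver=docker

	# confirm the server version, as the test does after each start
	kubectl --context kubernetes-upgrade-20220602104828-2113 version --output=json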
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220602104828-2113 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220602104828-2113 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker : (14.490823813s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2022-06-02 10:53:24.823412 -0700 PDT m=+2499.932635653
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220602104828-2113
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220602104828-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0b073ea955d00dee87f74fe61b1ed09063a5753fef0f3966ef88eea68f95c7bb",
	        "Created": "2022-06-02T17:48:38.942243898Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 143754,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:52:45.022855759Z",
	            "FinishedAt": "2022-06-02T17:52:42.313692266Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/0b073ea955d00dee87f74fe61b1ed09063a5753fef0f3966ef88eea68f95c7bb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b073ea955d00dee87f74fe61b1ed09063a5753fef0f3966ef88eea68f95c7bb/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b073ea955d00dee87f74fe61b1ed09063a5753fef0f3966ef88eea68f95c7bb/hosts",
	        "LogPath": "/var/lib/docker/containers/0b073ea955d00dee87f74fe61b1ed09063a5753fef0f3966ef88eea68f95c7bb/0b073ea955d00dee87f74fe61b1ed09063a5753fef0f3966ef88eea68f95c7bb-json.log",
	        "Name": "/kubernetes-upgrade-20220602104828-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-20220602104828-2113:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220602104828-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f62c704c5ff38b4039c5921eb9e3145227087658ae60e9ffc47274d6ada7af9b-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f62c704c5ff38b4039c5921eb9e3145227087658ae60e9ffc47274d6ada7af9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f62c704c5ff38b4039c5921eb9e3145227087658ae60e9ffc47274d6ada7af9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f62c704c5ff38b4039c5921eb9e3145227087658ae60e9ffc47274d6ada7af9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220602104828-2113",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220602104828-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220602104828-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220602104828-2113",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220602104828-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2266b4f0348b32cb44ff5788cb02d992421d8cf5eeeabc31ab947d0146d7f2f5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64025"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64026"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64028"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64029"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2266b4f0348b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220602104828-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0b073ea955d0",
	                        "kubernetes-upgrade-20220602104828-2113"
	                    ],
	                    "NetworkID": "d65f61836c755ddbc4d8385caa97f4d799cf7a92456ad60b505dc0f8c2390f47",
	                    "EndpointID": "58cab977c2f7255d0cba717a7fa1d097221a66bf2c8b284ee61939fabc5ce4d2",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
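The docker inspect dump above is the harness's full post-mortem snapshot; when triaging by hand, a Go-template filter can pull out just the fields that usually matter (state, restart count, published ports). A small sketch assuming the same container name; the template style mirrors the filter the harness itself uses later for the SSH port:

	# show only container state and port mappings for the failed profile
	docker inspect -f 'status={{.State.Status}} restarts={{.RestartCount}} ports={{json .NetworkSettings.Ports}}' \
	  kubernetes-upgrade-20220602104828-2113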
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220602104828-2113 -n kubernetes-upgrade-20220602104828-2113
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220602104828-2113 logs -n 25

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220602104828-2113 logs -n 25: (3.813054882s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                  |                Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p                                     | cert-expiration-20220602104608-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | cert-expiration-20220602104608-2113    |                                        |         |                |                     |                     |
	|         | --memory=2048 --cert-expiration=3m     |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| start   | -p                                     | cert-options-20220602104618-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | cert-options-20220602104618-2113       |                                        |         |                |                     |                     |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                                        |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                                        |         |                |                     |                     |
	|         | --apiserver-names=localhost            |                                        |         |                |                     |                     |
	|         | --apiserver-names=www.google.com       |                                        |         |                |                     |                     |
	|         | --apiserver-port=8555                  |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	|         | --apiserver-name=localhost             |                                        |         |                |                     |                     |
	| ssh     | cert-options-20220602104618-2113       | cert-options-20220602104618-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | ssh openssl x509 -text -noout -in      |                                        |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                                        |         |                |                     |                     |
	| ssh     | -p                                     | cert-options-20220602104618-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | cert-options-20220602104618-2113       |                                        |         |                |                     |                     |
	|         | -- sudo cat                            |                                        |         |                |                     |                     |
	|         | /etc/kubernetes/admin.conf             |                                        |         |                |                     |                     |
	| delete  | -p                                     | cert-options-20220602104618-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | cert-options-20220602104618-2113       |                                        |         |                |                     |                     |
	| delete  | -p                                     | running-upgrade-20220602104647-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:47 PDT | 02 Jun 22 10:47 PDT |
	|         | running-upgrade-20220602104647-2113    |                                        |         |                |                     |                     |
	| delete  | -p                                     | missing-upgrade-20220602104738-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:48 PDT | 02 Jun 22 10:48 PDT |
	|         | missing-upgrade-20220602104738-2113    |                                        |         |                |                     |                     |
	| start   | -p                                     | cert-expiration-20220602104608-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:49 PDT | 02 Jun 22 10:49 PDT |
	|         | cert-expiration-20220602104608-2113    |                                        |         |                |                     |                     |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --cert-expiration=8760h                |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| delete  | -p                                     | cert-expiration-20220602104608-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:49 PDT | 02 Jun 22 10:49 PDT |
	|         | cert-expiration-20220602104608-2113    |                                        |         |                |                     |                     |
	| logs    | -p                                     | stopped-upgrade-20220602104942-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:50 PDT | 02 Jun 22 10:50 PDT |
	|         | stopped-upgrade-20220602104942-2113    |                                        |         |                |                     |                     |
	| delete  | -p                                     | stopped-upgrade-20220602104942-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:50 PDT | 02 Jun 22 10:50 PDT |
	|         | stopped-upgrade-20220602104942-2113    |                                        |         |                |                     |                     |
	| start   | -p pause-20220602105035-2113           | pause-20220602105035-2113              | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:50 PDT | 02 Jun 22 10:51 PDT |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --install-addons=false                 |                                        |         |                |                     |                     |
	|         | --wait=all --driver=docker             |                                        |         |                |                     |                     |
	| start   | -p pause-20220602105035-2113           | pause-20220602105035-2113              | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:51 PDT | 02 Jun 22 10:51 PDT |
	|         | --alsologtostderr -v=1                 |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| pause   | -p pause-20220602105035-2113           | pause-20220602105035-2113              | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:51 PDT | 02 Jun 22 10:51 PDT |
	|         | --alsologtostderr -v=5                 |                                        |         |                |                     |                     |
	| logs    | pause-20220602105035-2113 logs         | pause-20220602105035-2113              | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:51 PDT | 02 Jun 22 10:52 PDT |
	|         | -n 25                                  |                                        |         |                |                     |                     |
	| delete  | -p pause-20220602105035-2113           | pause-20220602105035-2113              | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:52 PDT | 02 Jun 22 10:52 PDT |
	| stop    | -p                                     | kubernetes-upgrade-20220602104828-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:52 PDT | 02 Jun 22 10:52 PDT |
	|         | kubernetes-upgrade-20220602104828-2113 |                                        |         |                |                     |                     |
	| start   | -p                                     | NoKubernetes-20220602105227-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:52 PDT | 02 Jun 22 10:52 PDT |
	|         | NoKubernetes-20220602105227-2113       |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| start   | -p                                     | NoKubernetes-20220602105227-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:52 PDT | 02 Jun 22 10:53 PDT |
	|         | NoKubernetes-20220602105227-2113       |                                        |         |                |                     |                     |
	|         | --no-kubernetes --driver=docker        |                                        |         |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220602104828-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:52 PDT | 02 Jun 22 10:53 PDT |
	|         | kubernetes-upgrade-20220602104828-2113 |                                        |         |                |                     |                     |
	|         | --memory=2200                          |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6           |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |                |                     |                     |
	|         |                                        |                                        |         |                |                     |                     |
	| delete  | -p                                     | NoKubernetes-20220602105227-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:53 PDT | 02 Jun 22 10:53 PDT |
	|         | NoKubernetes-20220602105227-2113       |                                        |         |                |                     |                     |
	| start   | -p                                     | NoKubernetes-20220602105227-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:53 PDT | 02 Jun 22 10:53 PDT |
	|         | NoKubernetes-20220602105227-2113       |                                        |         |                |                     |                     |
	|         | --no-kubernetes --driver=docker        |                                        |         |                |                     |                     |
	| profile | list                                   | minikube                               | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:53 PDT | 02 Jun 22 10:53 PDT |
	| profile | list --output=json                     | minikube                               | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:53 PDT | 02 Jun 22 10:53 PDT |
	| start   | -p                                     | kubernetes-upgrade-20220602104828-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:53 PDT | 02 Jun 22 10:53 PDT |
	|         | kubernetes-upgrade-20220602104828-2113 |                                        |         |                |                     |                     |
	|         | --memory=2200                          |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6           |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |                |                     |                     |
	|         |                                        |                                        |         |                |                     |                     |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 10:53:12
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 10:53:12.762129   11364 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:53:12.762282   11364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:53:12.762285   11364 out.go:309] Setting ErrFile to fd 2...
	I0602 10:53:12.762289   11364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:53:12.762397   11364 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 10:53:12.762713   11364 out.go:303] Setting JSON to false
	I0602 10:53:12.778774   11364 start.go:115] hostinfo: {"hostname":"37309.local","uptime":3162,"bootTime":1654189230,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 10:53:12.778882   11364 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 10:53:12.800635   11364 out.go:177] * [NoKubernetes-20220602105227-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 10:53:12.842260   11364 notify.go:193] Checking for updates...
	I0602 10:53:12.863066   11364 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 10:53:12.905277   11364 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 10:53:12.947346   11364 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 10:53:13.005419   11364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 10:53:13.048343   11364 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 10:53:13.070050   11364 config.go:178] Loaded profile config "kubernetes-upgrade-20220602104828-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 10:53:13.070102   11364 start.go:1656] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0602 10:53:13.070145   11364 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 10:53:13.144990   11364 docker.go:137] docker version: linux-20.10.14
	I0602 10:53:13.145131   11364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:53:13.274978   11364 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:53 SystemTime:2022-06-02 17:53:13.215670099 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:53:13.296774   11364 out.go:177] * Using the docker driver based on user configuration
	I0602 10:53:13.317508   11364 start.go:284] selected driver: docker
	I0602 10:53:13.317543   11364 start.go:806] validating driver "docker" against <nil>
	I0602 10:53:13.317573   11364 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 10:53:13.317906   11364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:53:13.449707   11364 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:53 SystemTime:2022-06-02 17:53:13.390143315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:53:13.449807   11364 start.go:1656] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0602 10:53:13.449815   11364 start.go:1656] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0602 10:53:13.449839   11364 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 10:53:13.451856   11364 start_flags.go:373] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0602 10:53:13.451968   11364 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0602 10:53:13.473469   11364 out.go:177] * Using Docker Desktop driver with the root privilege
	I0602 10:53:13.494432   11364 cni.go:95] Creating CNI manager for ""
	I0602 10:53:13.494444   11364 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:53:13.494469   11364 start_flags.go:306] config:
	{Name:NoKubernetes-20220602105227-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:NoKubernetes-20220602105227-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:53:13.494549   11364 start.go:1656] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0602 10:53:13.520485   11364 out.go:177] * Starting minikube without Kubernetes NoKubernetes-20220602105227-2113 in cluster NoKubernetes-20220602105227-2113
	I0602 10:53:13.562360   11364 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 10:53:13.583470   11364 out.go:177] * Pulling base image ...
	I0602 10:53:13.625731   11364 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	I0602 10:53:13.625760   11364 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 10:53:13.695950   11364 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 10:53:13.695989   11364 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	W0602 10:53:13.707353   11364 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0602 10:53:13.707545   11364 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/NoKubernetes-20220602105227-2113/config.json ...
	I0602 10:53:13.707582   11364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/NoKubernetes-20220602105227-2113/config.json: {Name:mkad484b251f0e23489fd1dfdbc901d04529e515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:53:13.707829   11364 cache.go:206] Successfully downloaded all kic artifacts
	I0602 10:53:13.707859   11364 start.go:352] acquiring machines lock for NoKubernetes-20220602105227-2113: {Name:mkdcc65b1b0ff40805643a42253d1ed6d52f826a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:53:13.707898   11364 start.go:356] acquired machines lock for "NoKubernetes-20220602105227-2113" in 31.273µs
	I0602 10:53:13.707912   11364 start.go:91] Provisioning new machine with config: &{Name:NoKubernetes-20220602105227-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-20220602105227-2113 Namespa
ce:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 Kubern
etesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 10:53:13.707985   11364 start.go:131] createHost starting for "" (driver="docker")
	I0602 10:53:11.446953   11307 machine.go:88] provisioning docker machine ...
	I0602 10:53:11.447005   11307 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220602104828-2113"
	I0602 10:53:11.447126   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:11.591969   11307 main.go:134] libmachine: Using SSH client type: native
	I0602 10:53:11.592191   11307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64025 <nil> <nil>}
	I0602 10:53:11.592203   11307 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220602104828-2113 && echo "kubernetes-upgrade-20220602104828-2113" | sudo tee /etc/hostname
	I0602 10:53:11.719076   11307 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220602104828-2113
	
	I0602 10:53:11.719157   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:11.790615   11307 main.go:134] libmachine: Using SSH client type: native
	I0602 10:53:11.790761   11307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64025 <nil> <nil>}
	I0602 10:53:11.790784   11307 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220602104828-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220602104828-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220602104828-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 10:53:11.916551   11307 main.go:134] libmachine: SSH cmd err, output: <nil>: 
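Throughout this provisioning phase, cli_runner resolves the host port that Docker published for the node container's SSH daemon by templating "docker container inspect" (the repeated "22/tcp ... HostPort" queries above, which here return 64025). A minimal Go sketch of that lookup, shelling out to the docker CLI the same way; the container name is simply the one from this run, and docker is assumed to be on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort returns the host port Docker mapped to the container's 22/tcp,
// using the same inspect template that appears in the log above.
func hostSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("kubernetes-upgrade-20220602104828-2113")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("SSH reachable at 127.0.0.1:" + port)
}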
	I0602 10:53:11.916580   11307 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 10:53:11.916612   11307 ubuntu.go:177] setting up certificates
	I0602 10:53:11.916629   11307 provision.go:83] configureAuth start
	I0602 10:53:11.916706   11307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:11.987899   11307 provision.go:138] copyHostCerts
	I0602 10:53:11.987990   11307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 10:53:11.988000   11307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 10:53:11.988113   11307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 10:53:11.988325   11307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 10:53:11.988334   11307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 10:53:11.988390   11307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 10:53:11.988528   11307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 10:53:11.988534   11307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 10:53:11.988586   11307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 10:53:11.988703   11307 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220602104828-2113 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220602104828-2113]
	I0602 10:53:12.237505   11307 provision.go:172] copyRemoteCerts
	I0602 10:53:12.237558   11307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 10:53:12.237598   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:12.306507   11307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64025 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kubernetes-upgrade-20220602104828-2113/id_rsa Username:docker}
	I0602 10:53:12.391604   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 10:53:12.410197   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0602 10:53:12.428696   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 10:53:12.448350   11307 provision.go:86] duration metric: configureAuth took 531.703699ms
	I0602 10:53:12.448367   11307 ubuntu.go:193] setting minikube options for container-runtime
	I0602 10:53:12.448495   11307 config.go:178] Loaded profile config "kubernetes-upgrade-20220602104828-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 10:53:12.448586   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:12.519781   11307 main.go:134] libmachine: Using SSH client type: native
	I0602 10:53:12.519951   11307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64025 <nil> <nil>}
	I0602 10:53:12.519963   11307 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 10:53:12.637840   11307 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 10:53:12.637857   11307 ubuntu.go:71] root file system type: overlay
	I0602 10:53:12.638013   11307 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 10:53:12.638092   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:12.723837   11307 main.go:134] libmachine: Using SSH client type: native
	I0602 10:53:12.724134   11307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64025 <nil> <nil>}
	I0602 10:53:12.724183   11307 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 10:53:12.850142   11307 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 10:53:12.850224   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:13.072874   11307 main.go:134] libmachine: Using SSH client type: native
	I0602 10:53:13.073047   11307 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64025 <nil> <nil>}
	I0602 10:53:13.073065   11307 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 10:53:13.191933   11307 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 10:53:13.191950   11307 machine.go:91] provisioned docker machine in 1.744971819s
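The docker.service update just above follows a write-then-swap pattern: the rendered unit is written to docker.service.new, diffed against the live file, and only moved into place (followed by daemon-reload, enable and restart) when the content differs. A rough local sketch of the same compare-and-swap idea in Go; the path is illustrative, and the reload/restart side is omitted because minikube performs all of this over SSH with sudo:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateUnit writes newContent to path only when it differs from what is
// already there, mirroring the "write .new, diff, mv" sequence in the log.
func updateUnit(path string, newContent []byte) (changed bool, err error) {
	old, _ := os.ReadFile(path) // a missing file simply counts as "different"
	if bytes.Equal(old, newContent) {
		return false, nil
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, newContent, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path)
}

func main() {
	// Illustrative target; the real file is /lib/systemd/system/docker.service on the node.
	changed, err := updateUnit("/tmp/docker.service.example", []byte("[Unit]\nDescription=example\n"))
	fmt.Println("changed:", changed, "err:", err)
}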
	I0602 10:53:13.191960   11307 start.go:306] post-start starting for "kubernetes-upgrade-20220602104828-2113" (driver="docker")
	I0602 10:53:13.191964   11307 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 10:53:13.192020   11307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 10:53:13.192062   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:13.262324   11307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64025 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kubernetes-upgrade-20220602104828-2113/id_rsa Username:docker}
	I0602 10:53:13.347626   11307 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 10:53:13.351228   11307 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 10:53:13.351246   11307 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 10:53:13.351253   11307 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 10:53:13.351257   11307 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 10:53:13.351267   11307 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 10:53:13.351367   11307 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 10:53:13.351514   11307 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 10:53:13.351657   11307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 10:53:13.359782   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 10:53:13.378369   11307 start.go:309] post-start completed in 186.399274ms
	I0602 10:53:13.378450   11307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 10:53:13.378507   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:13.447444   11307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64025 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kubernetes-upgrade-20220602104828-2113/id_rsa Username:docker}
	I0602 10:53:13.532000   11307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 10:53:13.536471   11307 fix.go:57] fixHost completed within 2.222141673s
	I0602 10:53:13.536483   11307 start.go:81] releasing machines lock for "kubernetes-upgrade-20220602104828-2113", held for 2.222175602s
	I0602 10:53:13.536551   11307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:13.638356   11307 ssh_runner.go:195] Run: systemctl --version
	I0602 10:53:13.638365   11307 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 10:53:13.638455   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:13.638550   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:13.754942   11307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64025 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kubernetes-upgrade-20220602104828-2113/id_rsa Username:docker}
	I0602 10:53:13.754941   11307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64025 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kubernetes-upgrade-20220602104828-2113/id_rsa Username:docker}
	I0602 10:53:13.966173   11307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 10:53:13.976193   11307 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 10:53:13.986256   11307 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 10:53:13.986317   11307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 10:53:13.996095   11307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 10:53:14.011876   11307 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 10:53:14.105959   11307 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 10:53:14.191990   11307 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 10:53:14.202555   11307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 10:53:14.281749   11307 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 10:53:14.292773   11307 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 10:53:14.327907   11307 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 10:53:14.415155   11307 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 10:53:14.415262   11307 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220602104828-2113 dig +short host.docker.internal
	I0602 10:53:14.573415   11307 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 10:53:14.573555   11307 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 10:53:14.579533   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:14.649706   11307 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 10:53:14.649768   11307 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 10:53:14.685149   11307 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
	I0602 10:53:14.685165   11307 docker.go:541] Images already preloaded, skipping extraction
	I0602 10:53:14.685237   11307 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 10:53:14.717889   11307 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
	I0602 10:53:14.717911   11307 cache_images.go:84] Images are preloaded, skipping loading
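The "docker images --format {{.Repository}}:{{.Tag}}" listing above is what drives the decision to skip preload extraction: the required v1.23.6 control-plane images are already present. A rough local sketch of that presence check, shelling out to the docker CLI (the log runs the same command on the node over SSH); the image names are taken from the listing above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	// A few of the images expected for Kubernetes v1.23.6.
	required := []string{
		"k8s.gcr.io/kube-apiserver:v1.23.6",
		"k8s.gcr.io/etcd:3.5.1-0",
		"k8s.gcr.io/coredns/coredns:v1.8.6",
	}
	for _, img := range required {
		fmt.Printf("%-40s present=%v\n", img, have[img])
	}
}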
	I0602 10:53:14.717985   11307 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 10:53:14.805067   11307 cni.go:95] Creating CNI manager for ""
	I0602 10:53:14.805081   11307 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:53:14.805113   11307 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 10:53:14.805126   11307 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220602104828-2113 NodeName:kubernetes-upgrade-20220602104828-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd C
lientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 10:53:14.805234   11307 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20220602104828-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 10:53:14.805318   11307 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20220602104828-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:kubernetes-upgrade-20220602104828-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
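The kubeadm config rendered above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small sketch that decodes such a multi-document file and prints each document's kind, using gopkg.in/yaml.v3; this is only an illustration of the file's shape, not something minikube itself does here:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the scp step below; adjust to wherever the rendered config lives.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}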
	I0602 10:53:14.805377   11307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 10:53:14.813324   11307 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 10:53:14.813398   11307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 10:53:14.821894   11307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I0602 10:53:14.836597   11307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 10:53:14.850380   11307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2059 bytes)
	I0602 10:53:14.864514   11307 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 10:53:14.868887   11307 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113 for IP: 192.168.58.2
	I0602 10:53:14.869009   11307 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 10:53:14.869062   11307 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 10:53:14.869144   11307 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/client.key
	I0602 10:53:14.869207   11307 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.key.cee25041
	I0602 10:53:14.869267   11307 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/proxy-client.key
	I0602 10:53:14.869466   11307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 10:53:14.869507   11307 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 10:53:14.869519   11307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 10:53:14.869550   11307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 10:53:14.869586   11307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 10:53:14.869613   11307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 10:53:14.869680   11307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 10:53:14.870203   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 10:53:14.888656   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0602 10:53:14.910750   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 10:53:14.929678   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 10:53:14.948370   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 10:53:14.971405   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 10:53:14.993097   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 10:53:15.016463   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 10:53:15.035717   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 10:53:15.060147   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 10:53:15.078328   11307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 10:53:15.097350   11307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 10:53:15.112124   11307 ssh_runner.go:195] Run: openssl version
	I0602 10:53:15.118929   11307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 10:53:15.127910   11307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 10:53:15.132236   11307 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 10:53:15.132294   11307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 10:53:15.140899   11307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
	I0602 10:53:15.149499   11307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 10:53:15.158787   11307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 10:53:15.162894   11307 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 10:53:15.162944   11307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 10:53:15.168287   11307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 10:53:15.177468   11307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 10:53:15.186763   11307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:53:15.191160   11307 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:53:15.191219   11307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:53:15.196696   11307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
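The certificate setup above hashes each shared PEM with "openssl x509 -hash -noout" and, when missing, links the resulting <hash>.0 name into /etc/ssl/certs so OpenSSL-based clients can find the CA by subject hash. A Go sketch of the same step; it assumes openssl is on PATH, needs root for /etc/ssl/certs, and the certificate path in main is just the example from this run:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the hash-and-symlink step from the log.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked, like the `test -L ... ||` guard above
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("link failed:", err)
	}
}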
	I0602 10:53:15.204276   11307 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220602104828-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubernetes-upgrade-20220602104828-2113 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false}
	I0602 10:53:15.204371   11307 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 10:53:15.235408   11307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 10:53:15.251788   11307 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 10:53:15.251809   11307 kubeadm.go:626] restartCluster start
	I0602 10:53:15.251864   11307 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 10:53:15.261674   11307 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 10:53:15.261749   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:15.342703   11307 kubeconfig.go:92] found "kubernetes-upgrade-20220602104828-2113" server: "https://127.0.0.1:64029"
	I0602 10:53:15.343110   11307 kapi.go:59] client config for kubernetes-upgrade-20220602104828-2113: &rest.Config{Host:"https://127.0.0.1:64029", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-
upgrade-20220602104828-2113/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0602 10:53:15.343661   11307 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 10:53:15.357167   11307 api_server.go:165] Checking apiserver status ...
	I0602 10:53:15.357292   11307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 10:53:15.368795   11307 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1496/cgroup
	W0602 10:53:15.378415   11307 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1496/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0602 10:53:15.378429   11307 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64029/healthz ...
	I0602 10:53:13.751191   11364 out.go:204] * Creating docker container (CPUs=2, Memory=5895MB) ...
	I0602 10:53:13.751662   11364 start.go:165] libmachine.API.Create for "NoKubernetes-20220602105227-2113" (driver="docker")
	I0602 10:53:13.751706   11364 client.go:168] LocalClient.Create starting
	I0602 10:53:13.751894   11364 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem
	I0602 10:53:13.751962   11364 main.go:134] libmachine: Decoding PEM data...
	I0602 10:53:13.751985   11364 main.go:134] libmachine: Parsing certificate...
	I0602 10:53:13.752079   11364 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem
	I0602 10:53:13.752124   11364 main.go:134] libmachine: Decoding PEM data...
	I0602 10:53:13.752147   11364 main.go:134] libmachine: Parsing certificate...
	I0602 10:53:13.752912   11364 cli_runner.go:164] Run: docker network inspect NoKubernetes-20220602105227-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 10:53:13.817037   11364 cli_runner.go:211] docker network inspect NoKubernetes-20220602105227-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 10:53:13.817110   11364 network_create.go:272] running [docker network inspect NoKubernetes-20220602105227-2113] to gather additional debugging logs...
	I0602 10:53:13.817124   11364 cli_runner.go:164] Run: docker network inspect NoKubernetes-20220602105227-2113
	W0602 10:53:13.880058   11364 cli_runner.go:211] docker network inspect NoKubernetes-20220602105227-2113 returned with exit code 1
	I0602 10:53:13.880075   11364 network_create.go:275] error running [docker network inspect NoKubernetes-20220602105227-2113]: docker network inspect NoKubernetes-20220602105227-2113: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: NoKubernetes-20220602105227-2113
	I0602 10:53:13.880092   11364 network_create.go:277] output of [docker network inspect NoKubernetes-20220602105227-2113]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: NoKubernetes-20220602105227-2113
	
	** /stderr **
	I0602 10:53:13.880156   11364 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 10:53:13.943577   11364 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007504b8] misses:0}
	I0602 10:53:13.943608   11364 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 10:53:13.943625   11364 network_create.go:115] attempt to create docker network NoKubernetes-20220602105227-2113 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 10:53:13.943675   11364 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220602105227-2113
	I0602 10:53:14.039464   11364 network_create.go:99] docker network NoKubernetes-20220602105227-2113 192.168.49.0/24 created
	I0602 10:53:14.039485   11364 kic.go:106] calculated static IP "192.168.49.2" for the "NoKubernetes-20220602105227-2113" container
	I0602 10:53:14.039581   11364 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 10:53:14.109033   11364 cli_runner.go:164] Run: docker volume create NoKubernetes-20220602105227-2113 --label name.minikube.sigs.k8s.io=NoKubernetes-20220602105227-2113 --label created_by.minikube.sigs.k8s.io=true
	I0602 10:53:14.175568   11364 oci.go:103] Successfully created a docker volume NoKubernetes-20220602105227-2113
	I0602 10:53:14.175650   11364 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-20220602105227-2113-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-20220602105227-2113 --entrypoint /usr/bin/test -v NoKubernetes-20220602105227-2113:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 10:53:14.758116   11364 oci.go:107] Successfully prepared a docker volume NoKubernetes-20220602105227-2113
	I0602 10:53:14.758151   11364 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	I0602 10:53:14.758259   11364 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 10:53:14.890400   11364 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-20220602105227-2113 --name NoKubernetes-20220602105227-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-20220602105227-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-20220602105227-2113 --network NoKubernetes-20220602105227-2113 --ip 192.168.49.2 --volume NoKubernetes-20220602105227-2113:/var --security-opt apparmor=unconfined --memory=5895mb --memory-swap=5895mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
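The docker run command above starts the kic node container on its dedicated 192.168.49.0/24 network with a static --ip, a named volume for /var, and several --publish flags that bind 22, 2376, 5000, 8443 and 32443 to ephemeral ports on 127.0.0.1. A quick way to see those mappings after such a run is "docker port"; a minimal Go sketch, with the container name taken from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prints every host mapping created by the --publish flags above.
	out, err := exec.Command("docker", "port", "NoKubernetes-20220602105227-2113").CombinedOutput()
	if err != nil {
		fmt.Println("docker port failed:", err)
	}
	fmt.Print(string(out))
}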
	I0602 10:53:15.293612   11364 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220602105227-2113 --format={{.State.Running}}
	I0602 10:53:15.372947   11364 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220602105227-2113 --format={{.State.Status}}
	I0602 10:53:15.454263   11364 cli_runner.go:164] Run: docker exec NoKubernetes-20220602105227-2113 stat /var/lib/dpkg/alternatives/iptables
	I0602 10:53:15.647507   11364 oci.go:247] the created container "NoKubernetes-20220602105227-2113" has a running status.
	I0602 10:53:15.647574   11364 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/NoKubernetes-20220602105227-2113/id_rsa...
	I0602 10:53:15.769342   11364 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/NoKubernetes-20220602105227-2113/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 10:53:15.883740   11364 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220602105227-2113 --format={{.State.Status}}
	I0602 10:53:15.954753   11364 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 10:53:15.954766   11364 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-20220602105227-2113 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0602 10:53:16.083337   11364 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220602105227-2113 --format={{.State.Status}}
	I0602 10:53:16.154209   11364 machine.go:88] provisioning docker machine ...
	I0602 10:53:16.154243   11364 ubuntu.go:169] provisioning hostname "NoKubernetes-20220602105227-2113"
	I0602 10:53:16.154327   11364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220602105227-2113
	I0602 10:53:16.224316   11364 main.go:134] libmachine: Using SSH client type: native
	I0602 10:53:16.224496   11364 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64512 <nil> <nil>}
	I0602 10:53:16.224508   11364 main.go:134] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-20220602105227-2113 && echo "NoKubernetes-20220602105227-2113" | sudo tee /etc/hostname
	I0602 10:53:16.350390   11364 main.go:134] libmachine: SSH cmd err, output: <nil>: NoKubernetes-20220602105227-2113
	
	I0602 10:53:16.350466   11364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220602105227-2113
	I0602 10:53:16.420962   11364 main.go:134] libmachine: Using SSH client type: native
	I0602 10:53:16.421115   11364 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64512 <nil> <nil>}
	I0602 10:53:16.421137   11364 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-20220602105227-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-20220602105227-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-20220602105227-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 10:53:16.538377   11364 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 10:53:16.538392   11364 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 10:53:16.538415   11364 ubuntu.go:177] setting up certificates
	I0602 10:53:16.538425   11364 provision.go:83] configureAuth start
	I0602 10:53:16.538494   11364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220602105227-2113
	I0602 10:53:16.609067   11364 provision.go:138] copyHostCerts
	I0602 10:53:16.609152   11364 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 10:53:16.609157   11364 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 10:53:16.609264   11364 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 10:53:16.609432   11364 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 10:53:16.609439   11364 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 10:53:16.609499   11364 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 10:53:16.609622   11364 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 10:53:16.609625   11364 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 10:53:16.609682   11364 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 10:53:16.609785   11364 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-20220602105227-2113 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube NoKubernetes-20220602105227-2113]
	I0602 10:53:16.767652   11364 provision.go:172] copyRemoteCerts
	I0602 10:53:16.767711   11364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 10:53:16.767759   11364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220602105227-2113
	I0602 10:53:16.842636   11364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64512 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/NoKubernetes-20220602105227-2113/id_rsa Username:docker}
	I0602 10:53:16.932262   11364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 10:53:16.952108   11364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0602 10:53:16.971999   11364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 10:53:16.989631   11364 provision.go:86] duration metric: configureAuth took 451.191409ms
	I0602 10:53:16.989641   11364 ubuntu.go:193] setting minikube options for container-runtime
	I0602 10:53:16.989790   11364 config.go:178] Loaded profile config "NoKubernetes-20220602105227-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0602 10:53:16.989859   11364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220602105227-2113
	I0602 10:53:17.061191   11364 main.go:134] libmachine: Using SSH client type: native
	I0602 10:53:17.061377   11364 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64512 <nil> <nil>}
	I0602 10:53:17.061391   11364 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 10:53:17.174293   11364 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 10:53:17.174305   11364 ubuntu.go:71] root file system type: overlay
	I0602 10:53:17.175141   11364 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 10:53:17.175255   11364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220602105227-2113
	I0602 10:53:17.245798   11364 main.go:134] libmachine: Using SSH client type: native
	I0602 10:53:17.245946   11364 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64512 <nil> <nil>}
	I0602 10:53:17.245991   11364 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 10:53:17.369377   11364 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 10:53:17.369455   11364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220602105227-2113
	I0602 10:53:17.440701   11364 main.go:134] libmachine: Using SSH client type: native
	I0602 10:53:17.440847   11364 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64512 <nil> <nil>}
	I0602 10:53:17.440857   11364 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 10:53:18.060322   11364 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:53:17.381433704 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0602 10:53:18.060337   11364 machine.go:91] provisioned docker machine in 1.906110089s
	I0602 10:53:18.060342   11364 client.go:171] LocalClient.Create took 4.308619384s
	I0602 10:53:18.060358   11364 start.go:173] duration metric: libmachine.API.Create for "NoKubernetes-20220602105227-2113" took 4.30868381s
	I0602 10:53:18.060365   11364 start.go:306] post-start starting for "NoKubernetes-20220602105227-2113" (driver="docker")
	I0602 10:53:18.060368   11364 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 10:53:18.060429   11364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 10:53:18.060479   11364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220602105227-2113
	I0602 10:53:18.134134   11364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64512 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/NoKubernetes-20220602105227-2113/id_rsa Username:docker}
	I0602 10:53:18.222680   11364 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 10:53:18.226951   11364 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 10:53:18.226964   11364 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 10:53:18.226971   11364 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 10:53:18.226978   11364 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 10:53:18.226987   11364 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 10:53:18.227106   11364 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 10:53:18.227240   11364 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 10:53:18.227388   11364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 10:53:18.235135   11364 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 10:53:18.253315   11364 start.go:309] post-start completed in 192.941136ms
	I0602 10:53:18.253860   11364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220602105227-2113
	I0602 10:53:18.326218   11364 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/NoKubernetes-20220602105227-2113/config.json ...
	I0602 10:53:18.326604   11364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 10:53:18.326648   11364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220602105227-2113
	I0602 10:53:18.397547   11364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64512 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/NoKubernetes-20220602105227-2113/id_rsa Username:docker}
	I0602 10:53:18.481238   11364 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 10:53:18.485572   11364 start.go:134] duration metric: createHost completed in 4.77756478s
	I0602 10:53:18.485585   11364 start.go:81] releasing machines lock for "NoKubernetes-20220602105227-2113", held for 4.777667107s
	I0602 10:53:18.485657   11364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220602105227-2113
	I0602 10:53:18.559435   11364 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 10:53:18.559519   11364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220602105227-2113
	I0602 10:53:18.560090   11364 ssh_runner.go:195] Run: systemctl --version
	I0602 10:53:18.560229   11364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220602105227-2113
	I0602 10:53:18.641455   11364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64512 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/NoKubernetes-20220602105227-2113/id_rsa Username:docker}
	I0602 10:53:18.643701   11364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64512 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/NoKubernetes-20220602105227-2113/id_rsa Username:docker}
	I0602 10:53:18.728354   11364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 10:53:18.859984   11364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 10:53:18.870767   11364 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 10:53:18.870834   11364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 10:53:18.881006   11364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 10:53:18.894303   11364 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 10:53:18.962910   11364 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 10:53:19.030427   11364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 10:53:19.041405   11364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 10:53:19.109199   11364 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 10:53:19.141371   11364 out.go:177] * Done! minikube is ready without Kubernetes!
	I0602 10:53:19.184483   11364 out.go:177] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube docker-env" to point your docker-cli to the docker inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	I0602 10:53:15.387940   11307 api_server.go:266] https://127.0.0.1:64029/healthz returned 200:
	ok
	I0602 10:53:15.421948   11307 system_pods.go:86] 5 kube-system pods found
	I0602 10:53:15.421967   11307 system_pods.go:89] "etcd-kubernetes-upgrade-20220602104828-2113" [5e5179b1-57f3-409d-92dc-21fdb9d4c03e] Pending
	I0602 10:53:15.421975   11307 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-20220602104828-2113" [529a0502-f992-4dd7-a3ee-c6fc1529c772] Pending
	I0602 10:53:15.421999   11307 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-20220602104828-2113" [4a265298-faf7-4916-8c39-1f15f32220b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0602 10:53:15.422017   11307 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-20220602104828-2113" [912cc496-82e8-4bf7-82f2-2105d3a12a25] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0602 10:53:15.422025   11307 system_pods.go:89] "storage-provisioner" [7c640e14-f354-4cd8-9a56-664599c4cd3b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0602 10:53:15.422034   11307 kubeadm.go:610] needs reconfigure: missing components: kube-dns, etcd, kube-apiserver, kube-proxy
	I0602 10:53:15.422041   11307 kubeadm.go:1092] stopping kube-system containers ...
	I0602 10:53:15.422093   11307 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 10:53:15.458295   11307 docker.go:442] Stopping containers: [6f730ff9aaad 305a22842312 d31d216215e6 aa5fd388a3dc 4c97a658c725 a7d6454cf8e1 4bf0af80b2a7 4190ccbe4b58]
	I0602 10:53:15.458382   11307 ssh_runner.go:195] Run: docker stop 6f730ff9aaad 305a22842312 d31d216215e6 aa5fd388a3dc 4c97a658c725 a7d6454cf8e1 4bf0af80b2a7 4190ccbe4b58
	I0602 10:53:16.796123   11307 ssh_runner.go:235] Completed: docker stop 6f730ff9aaad 305a22842312 d31d216215e6 aa5fd388a3dc 4c97a658c725 a7d6454cf8e1 4bf0af80b2a7 4190ccbe4b58: (1.337711657s)
	I0602 10:53:16.796198   11307 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 10:53:16.861984   11307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 10:53:16.871373   11307 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5759 Jun  2 17:50 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5791 Jun  2 17:50 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5955 Jun  2 17:50 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5743 Jun  2 17:50 /etc/kubernetes/scheduler.conf
	
	I0602 10:53:16.871440   11307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0602 10:53:16.880699   11307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0602 10:53:16.889761   11307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0602 10:53:16.898821   11307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0602 10:53:16.934541   11307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 10:53:16.943036   11307 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 10:53:16.943055   11307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 10:53:16.996213   11307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 10:53:17.708298   11307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 10:53:17.850749   11307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 10:53:17.900588   11307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0602 10:53:17.944714   11307 api_server.go:51] waiting for apiserver process to appear ...
	I0602 10:53:17.944779   11307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 10:53:18.457064   11307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 10:53:18.955038   11307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 10:53:18.966840   11307 api_server.go:71] duration metric: took 1.022125135s to wait for apiserver process to appear ...
	I0602 10:53:18.966872   11307 api_server.go:87] waiting for apiserver healthz status ...
	I0602 10:53:18.966883   11307 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64029/healthz ...
	I0602 10:53:21.926957   11307 api_server.go:266] https://127.0.0.1:64029/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0602 10:53:21.926980   11307 api_server.go:102] status: https://127.0.0.1:64029/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0602 10:53:22.429145   11307 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64029/healthz ...
	I0602 10:53:22.437699   11307 api_server.go:266] https://127.0.0.1:64029/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 10:53:22.437714   11307 api_server.go:102] status: https://127.0.0.1:64029/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 10:53:22.927250   11307 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64029/healthz ...
	I0602 10:53:22.932497   11307 api_server.go:266] https://127.0.0.1:64029/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 10:53:22.932511   11307 api_server.go:102] status: https://127.0.0.1:64029/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 10:53:23.429151   11307 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64029/healthz ...
	I0602 10:53:23.436652   11307 api_server.go:266] https://127.0.0.1:64029/healthz returned 200:
	ok
	I0602 10:53:23.443032   11307 api_server.go:140] control plane version: v1.23.6
	I0602 10:53:23.443042   11307 api_server.go:130] duration metric: took 4.476149284s to wait for apiserver health ...
	I0602 10:53:23.443050   11307 cni.go:95] Creating CNI manager for ""
	I0602 10:53:23.443056   11307 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:53:23.443066   11307 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 10:53:23.450081   11307 system_pods.go:59] 5 kube-system pods found
	I0602 10:53:23.450099   11307 system_pods.go:61] "etcd-kubernetes-upgrade-20220602104828-2113" [5e5179b1-57f3-409d-92dc-21fdb9d4c03e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 10:53:23.450106   11307 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220602104828-2113" [529a0502-f992-4dd7-a3ee-c6fc1529c772] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0602 10:53:23.450114   11307 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220602104828-2113" [4a265298-faf7-4916-8c39-1f15f32220b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0602 10:53:23.450120   11307 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220602104828-2113" [912cc496-82e8-4bf7-82f2-2105d3a12a25] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0602 10:53:23.450127   11307 system_pods.go:61] "storage-provisioner" [7c640e14-f354-4cd8-9a56-664599c4cd3b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0602 10:53:23.450131   11307 system_pods.go:74] duration metric: took 7.061762ms to wait for pod list to return data ...
	I0602 10:53:23.450141   11307 node_conditions.go:102] verifying NodePressure condition ...
	I0602 10:53:23.452959   11307 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 10:53:23.452975   11307 node_conditions.go:123] node cpu capacity is 6
	I0602 10:53:23.452987   11307 node_conditions.go:105] duration metric: took 2.842559ms to run NodePressure ...
	I0602 10:53:23.452999   11307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 10:53:23.571724   11307 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 10:53:23.579027   11307 ops.go:34] apiserver oom_adj: -16
	I0602 10:53:23.579041   11307 kubeadm.go:630] restartCluster took 8.327201186s
	I0602 10:53:23.579046   11307 kubeadm.go:397] StartCluster complete in 8.374749559s
	I0602 10:53:23.579061   11307 settings.go:142] acquiring lock: {Name:mka48fc2cc9e132f8df9370d54d7f09abdd5d2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:53:23.579139   11307 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 10:53:23.579569   11307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:53:23.580096   11307 kapi.go:59] client config for kubernetes-upgrade-20220602104828-2113: &rest.Config{Host:"https://127.0.0.1:64029", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0602 10:53:23.582923   11307 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20220602104828-2113" rescaled to 1
	I0602 10:53:23.582963   11307 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 10:53:23.582971   11307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 10:53:23.582992   11307 addons.go:415] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0602 10:53:23.602919   11307 out.go:177] * Verifying Kubernetes components...
	I0602 10:53:23.583126   11307 config.go:178] Loaded profile config "kubernetes-upgrade-20220602104828-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 10:53:23.602976   11307 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-20220602104828-2113"
	I0602 10:53:23.602979   11307 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-20220602104828-2113"
	I0602 10:53:23.634563   11307 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0602 10:53:23.646072   11307 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20220602104828-2113"
	I0602 10:53:23.646077   11307 addons.go:153] Setting addon storage-provisioner=true in "kubernetes-upgrade-20220602104828-2113"
	W0602 10:53:23.646090   11307 addons.go:165] addon storage-provisioner should already be in state true
	I0602 10:53:23.646114   11307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 10:53:23.646149   11307 host.go:66] Checking if "kubernetes-upgrade-20220602104828-2113" exists ...
	I0602 10:53:23.646384   11307 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220602104828-2113 --format={{.State.Status}}
	I0602 10:53:23.647122   11307 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220602104828-2113 --format={{.State.Status}}
	I0602 10:53:23.659527   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:23.731224   11307 kapi.go:59] client config for kubernetes-upgrade-20220602104828-2113: &rest.Config{Host:"https://127.0.0.1:64029", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubernetes-upgrade-20220602104828-2113/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0602 10:53:23.759374   11307 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 10:53:23.738660   11307 addons.go:153] Setting addon default-storageclass=true in "kubernetes-upgrade-20220602104828-2113"
	W0602 10:53:23.781347   11307 addons.go:165] addon default-storageclass should already be in state true
	I0602 10:53:23.781388   11307 host.go:66] Checking if "kubernetes-upgrade-20220602104828-2113" exists ...
	I0602 10:53:23.781521   11307 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 10:53:23.781540   11307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 10:53:23.781657   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:23.783451   11307 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220602104828-2113 --format={{.State.Status}}
	I0602 10:53:23.785575   11307 api_server.go:51] waiting for apiserver process to appear ...
	I0602 10:53:23.786121   11307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 10:53:23.797440   11307 api_server.go:71] duration metric: took 214.456015ms to wait for apiserver process to appear ...
	I0602 10:53:23.797463   11307 api_server.go:87] waiting for apiserver healthz status ...
	I0602 10:53:23.797478   11307 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64029/healthz ...
	I0602 10:53:23.803992   11307 api_server.go:266] https://127.0.0.1:64029/healthz returned 200:
	ok
	I0602 10:53:23.805491   11307 api_server.go:140] control plane version: v1.23.6
	I0602 10:53:23.805503   11307 api_server.go:130] duration metric: took 8.035863ms to wait for apiserver health ...
	I0602 10:53:23.805509   11307 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 10:53:23.810963   11307 system_pods.go:59] 5 kube-system pods found
	I0602 10:53:23.810981   11307 system_pods.go:61] "etcd-kubernetes-upgrade-20220602104828-2113" [5e5179b1-57f3-409d-92dc-21fdb9d4c03e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 10:53:23.810994   11307 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220602104828-2113" [529a0502-f992-4dd7-a3ee-c6fc1529c772] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0602 10:53:23.811003   11307 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220602104828-2113" [4a265298-faf7-4916-8c39-1f15f32220b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0602 10:53:23.811014   11307 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220602104828-2113" [912cc496-82e8-4bf7-82f2-2105d3a12a25] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0602 10:53:23.811021   11307 system_pods.go:61] "storage-provisioner" [7c640e14-f354-4cd8-9a56-664599c4cd3b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0602 10:53:23.811028   11307 system_pods.go:74] duration metric: took 5.515177ms to wait for pod list to return data ...
	I0602 10:53:23.811035   11307 kubeadm.go:572] duration metric: took 228.057151ms to wait for : map[apiserver:true system_pods:true] ...
	I0602 10:53:23.811043   11307 node_conditions.go:102] verifying NodePressure condition ...
	I0602 10:53:23.814329   11307 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 10:53:23.814347   11307 node_conditions.go:123] node cpu capacity is 6
	I0602 10:53:23.814358   11307 node_conditions.go:105] duration metric: took 3.310731ms to run NodePressure ...
	I0602 10:53:23.814370   11307 start.go:213] waiting for startup goroutines ...
	I0602 10:53:23.868180   11307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64025 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kubernetes-upgrade-20220602104828-2113/id_rsa Username:docker}
	I0602 10:53:23.868893   11307 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 10:53:23.868910   11307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 10:53:23.869005   11307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602104828-2113
	I0602 10:53:23.945774   11307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64025 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kubernetes-upgrade-20220602104828-2113/id_rsa Username:docker}
	I0602 10:53:23.966694   11307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 10:53:24.041795   11307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 10:53:24.626679   11307 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0602 10:53:24.668463   11307 addons.go:417] enableAddons completed in 1.085442712s
	I0602 10:53:24.698414   11307 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0602 10:53:24.778412   11307 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-20220602104828-2113" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 17:52:45 UTC, end at Thu 2022-06-02 17:53:26 UTC. --
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.595773704Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.595806427Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.595822967Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.595830323Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.596858767Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.596890838Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.596903145Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.596910454Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.634024766Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.675798055Z" level=info msg="Loading containers: start."
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.749287642Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.778669498Z" level=info msg="Loading containers: done."
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.787247585Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.787312574Z" level=info msg="Daemon has completed initialization"
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 systemd[1]: Started Docker Application Container Engine.
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.810556808Z" level=info msg="API listen on [::]:2376"
	Jun 02 17:52:58 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:52:58.813085269Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 02 17:53:15 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:53:15.581210187Z" level=info msg="ignoring event" container=4bf0af80b2a7238ebb111144973982aeffb9c16b042879ac9fa1ac1a2a6ebbd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:53:15 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:53:15.584510141Z" level=info msg="ignoring event" container=4c97a658c72556d19fd56fedb65b78b6ad46f1e1ad363c7e8d99e8daba96e8c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:53:15 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:53:15.603465227Z" level=info msg="ignoring event" container=d31d216215e6df0147fb00e88afa2268a70b74a29b627aedbe7dfe9820755d75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:53:15 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:53:15.647154324Z" level=info msg="ignoring event" container=a7d6454cf8e14441fa3017ce82529263ce4fda140f333a1f577825de5e65df94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:53:15 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:53:15.647300272Z" level=info msg="ignoring event" container=4190ccbe4b58dbe8800ac17eb91c501a16daa0cb6bab252cca545a9c1a15ef52 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:53:15 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:53:15.653087201Z" level=info msg="ignoring event" container=305a228423128ea01869205bf8e1a1cc1dca83cb467cbb68dfdb6c1bd103ae95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:53:16 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:53:16.769811735Z" level=info msg="ignoring event" container=6f730ff9aaade0466dc8c5639f068c40397080a1a0b9e124a0202f4ae0d51bd4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:53:16 kubernetes-upgrade-20220602104828-2113 dockerd[523]: time="2022-06-02T17:53:16.769859563Z" level=info msg="ignoring event" container=aa5fd388a3dc5f78dc1c46a7ff72ce296cdfb07bd9390a017a1d7c0783587b52 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	99e51673b47b2       8fa62c12256df       8 seconds ago       Running             kube-apiserver            1                   db5bc79c98809
	252254e10d462       25f8c7f3da61c       8 seconds ago       Running             etcd                      1                   3c2d27be48c32
	b22fa12de394f       595f327f224a4       8 seconds ago       Running             kube-scheduler            1                   1e38760a2b2da
	5b9aa337d7894       df7b72818ad2e       8 seconds ago       Running             kube-controller-manager   1                   e2662a1d781f4
	6f730ff9aaade       8fa62c12256df       25 seconds ago      Exited              kube-apiserver            0                   a7d6454cf8e14
	305a228423128       df7b72818ad2e       25 seconds ago      Exited              kube-controller-manager   0                   4c97a658c7255
	d31d216215e6d       25f8c7f3da61c       25 seconds ago      Exited              etcd                      0                   4bf0af80b2a72
	aa5fd388a3dc5       595f327f224a4       25 seconds ago      Exited              kube-scheduler            0                   4190ccbe4b58d
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-20220602104828-2113
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-20220602104828-2113
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 17:53:04 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-20220602104828-2113
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 17:53:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 17:53:22 +0000   Thu, 02 Jun 2022 17:53:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 17:53:22 +0000   Thu, 02 Jun 2022 17:53:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 17:53:22 +0000   Thu, 02 Jun 2022 17:53:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 17:53:22 +0000   Thu, 02 Jun 2022 17:53:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    kubernetes-upgrade-20220602104828-2113
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                2a7447ea-3cd6-4a21-b532-20bbba104972
	  Boot ID:                    a475dd08-72ba-4c6d-89c1-75a58adc3783
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-20220602104828-2113                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         16s
	  kube-system                 kube-apiserver-kubernetes-upgrade-20220602104828-2113             250m (4%)     0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-20220602104828-2113    200m (3%)     0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-scheduler-kubernetes-upgrade-20220602104828-2113             100m (1%)     0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 26s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet  Node kubernetes-upgrade-20220602104828-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet  Node kubernetes-upgrade-20220602104828-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet  Node kubernetes-upgrade-20220602104828-2113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 8s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x5 over 8s)    kubelet  Node kubernetes-upgrade-20220602104828-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x4 over 8s)    kubelet  Node kubernetes-upgrade-20220602104828-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x4 over 8s)    kubelet  Node kubernetes-upgrade-20220602104828-2113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.001438] FS-Cache: O-key=[8] '8212da0200000000'
	[  +0.001117] FS-Cache: N-cookie c=0000000016a13b2d [p=000000009900b4c5 fl=2 nc=0 na=1]
	[  +0.001771] FS-Cache: N-cookie d=00000000c757acde n=00000000578e5195
	[  +0.001432] FS-Cache: N-key=[8] '8212da0200000000'
	[  +0.001802] FS-Cache: Duplicate cookie detected
	[  +0.000997] FS-Cache: O-cookie c=00000000d3155457 [p=000000009900b4c5 fl=226 nc=0 na=1]
	[  +0.001776] FS-Cache: O-cookie d=00000000c757acde n=00000000a75076f7
	[  +0.001441] FS-Cache: O-key=[8] '8212da0200000000'
	[  +0.001141] FS-Cache: N-cookie c=0000000016a13b2d [p=000000009900b4c5 fl=2 nc=0 na=1]
	[  +0.001768] FS-Cache: N-cookie d=00000000c757acde n=00000000de0e4618
	[  +0.001450] FS-Cache: N-key=[8] '8212da0200000000'
	[  +3.961276] FS-Cache: Duplicate cookie detected
	[  +0.001030] FS-Cache: O-cookie c=00000000422eca30 [p=000000009900b4c5 fl=226 nc=0 na=1]
	[  +0.001768] FS-Cache: O-cookie d=00000000c757acde n=00000000a474c9b7
	[  +0.001460] FS-Cache: O-key=[8] '8112da0200000000'
	[  +0.001115] FS-Cache: N-cookie c=00000000714715d4 [p=000000009900b4c5 fl=2 nc=0 na=1]
	[  +0.001757] FS-Cache: N-cookie d=00000000c757acde n=00000000de0e4618
	[  +0.001432] FS-Cache: N-key=[8] '8112da0200000000'
	[  +0.427724] FS-Cache: Duplicate cookie detected
	[  +0.001023] FS-Cache: O-cookie c=00000000f7889376 [p=000000009900b4c5 fl=226 nc=0 na=1]
	[  +0.001778] FS-Cache: O-cookie d=00000000c757acde n=0000000016dfe9c6
	[  +0.001461] FS-Cache: O-key=[8] '8b12da0200000000'
	[  +0.001088] FS-Cache: N-cookie c=00000000c2958e6c [p=000000009900b4c5 fl=2 nc=0 na=1]
	[  +0.001745] FS-Cache: N-cookie d=00000000c757acde n=0000000017be1b1a
	[  +0.001394] FS-Cache: N-key=[8] '8b12da0200000000'
	
	* 
	* ==> etcd [252254e10d46] <==
	* {"level":"info","ts":"2022-06-02T17:53:18.856Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-06-02T17:53:18.863Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-06-02T17:53:18.864Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-06-02T17:53:18.864Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-06-02T17:53:18.864Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:53:18.864Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:53:18.864Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-02T17:53:18.864Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-02T17:53:18.864Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-02T17:53:18.865Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T17:53:18.865Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T17:53:20.551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-02T17:53:20.551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-02T17:53:20.551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T17:53:20.551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-06-02T17:53:20.551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-06-02T17:53:20.551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-06-02T17:53:20.551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-06-02T17:53:20.554Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:kubernetes-upgrade-20220602104828-2113 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T17:53:20.554Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:53:20.554Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:53:20.555Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T17:53:20.555Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T17:53:20.555Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T17:53:20.555Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> etcd [d31d216215e6] <==
	* {"level":"info","ts":"2022-06-02T17:53:01.555Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T17:53:01.589Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T17:53:01.589Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:53:01.589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:53:01.589Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:53:01.590Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"warn","ts":"2022-06-02T17:53:08.871Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.978164ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/expand-controller\" ","response":"range_response_count:1 size:201"}
	{"level":"info","ts":"2022-06-02T17:53:08.871Z","caller":"traceutil/trace.go:171","msg":"trace[729637370] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/expand-controller; range_end:; response_count:1; response_revision:307; }","duration":"104.122986ms","start":"2022-06-02T17:53:08.767Z","end":"2022-06-02T17:53:08.871Z","steps":["trace[729637370] 'agreement among raft nodes before linearized reading'  (duration: 29.733557ms)","trace[729637370] 'range keys from in-memory index tree'  (duration: 74.204225ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T17:53:10.146Z","caller":"traceutil/trace.go:171","msg":"trace[588884064] linearizableReadLoop","detail":"{readStateIndex:350; appliedIndex:349; }","duration":"130.248111ms","start":"2022-06-02T17:53:10.015Z","end":"2022-06-02T17:53:10.146Z","steps":["trace[588884064] 'read index received'  (duration: 53.526074ms)","trace[588884064] 'applied index is now lower than readState.Index'  (duration: 76.721325ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T17:53:10.146Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"131.016914ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:353"}
	{"level":"info","ts":"2022-06-02T17:53:10.146Z","caller":"traceutil/trace.go:171","msg":"trace[236072786] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"150.974899ms","start":"2022-06-02T17:53:09.995Z","end":"2022-06-02T17:53:10.146Z","steps":["trace[236072786] 'process raft request'  (duration: 73.510312ms)","trace[236072786] 'compare'  (duration: 76.566495ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T17:53:10.146Z","caller":"traceutil/trace.go:171","msg":"trace[1182255715] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:345; }","duration":"131.065155ms","start":"2022-06-02T17:53:10.015Z","end":"2022-06-02T17:53:10.146Z","steps":["trace[1182255715] 'agreement among raft nodes before linearized reading'  (duration: 130.759899ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T17:53:10.339Z","caller":"traceutil/trace.go:171","msg":"trace[433278077] linearizableReadLoop","detail":"{readStateIndex:354; appliedIndex:353; }","duration":"112.180798ms","start":"2022-06-02T17:53:10.227Z","end":"2022-06-02T17:53:10.339Z","steps":["trace[433278077] 'read index received'  (duration: 111.638443ms)","trace[433278077] 'applied index is now lower than readState.Index'  (duration: 541.716µs)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T17:53:10.339Z","caller":"traceutil/trace.go:171","msg":"trace[919478264] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"113.330363ms","start":"2022-06-02T17:53:10.226Z","end":"2022-06-02T17:53:10.339Z","steps":["trace[919478264] 'process raft request'  (duration: 112.664121ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T17:53:10.339Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.635932ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:210"}
	{"level":"info","ts":"2022-06-02T17:53:10.340Z","caller":"traceutil/trace.go:171","msg":"trace[702494285] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:349; }","duration":"113.451317ms","start":"2022-06-02T17:53:10.227Z","end":"2022-06-02T17:53:10.340Z","steps":["trace[702494285] 'agreement among raft nodes before linearized reading'  (duration: 112.494248ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T17:53:10.687Z","caller":"traceutil/trace.go:171","msg":"trace[1568172123] transaction","detail":"{read_only:false; response_revision:356; number_of_response:1; }","duration":"131.615543ms","start":"2022-06-02T17:53:10.555Z","end":"2022-06-02T17:53:10.687Z","steps":["trace[1568172123] 'process raft request'  (duration: 72.686641ms)","trace[1568172123] 'compare'  (duration: 58.774706ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T17:53:15.535Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-02T17:53:15.535Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"kubernetes-upgrade-20220602104828-2113","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/06/02 17:53:15 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/02 17:53:15 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-02T17:53:15.547Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-06-02T17:53:15.548Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T17:53:15.549Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T17:53:15.549Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"kubernetes-upgrade-20220602104828-2113","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> kernel <==
	*  17:53:27 up 41 min,  0 users,  load average: 1.76, 1.67, 1.36
	Linux kubernetes-upgrade-20220602104828-2113 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [6f730ff9aaad] <==
	* W0602 17:53:16.541278       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.541266       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.541300       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.541321       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.541325       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.541309       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.541305       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.541348       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.541479       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.541647       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.542855       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.542940       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.542960       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.542963       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.542984       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.543042       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.542985       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.543081       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.542987       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.542993       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.543095       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.542998       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.543055       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.543457       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 17:53:16.544669       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [99e51673b47b] <==
	* I0602 17:53:21.919337       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0602 17:53:21.919344       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0602 17:53:21.921461       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0602 17:53:21.921512       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0602 17:53:21.921537       1 controller.go:83] Starting OpenAPI AggregationController
	I0602 17:53:21.924148       1 dynamic_serving_content.go:131] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0602 17:53:21.953193       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0602 17:53:21.953470       1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0602 17:53:21.975623       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0602 17:53:22.042947       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0602 17:53:22.043089       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0602 17:53:22.043855       1 cache.go:39] Caches are synced for autoregister controller
	I0602 17:53:22.044087       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0602 17:53:22.044212       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0602 17:53:22.044247       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0602 17:53:22.052456       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0602 17:53:22.080837       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0602 17:53:22.920044       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0602 17:53:22.920210       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0602 17:53:22.924335       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0602 17:53:23.539958       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 17:53:23.547039       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 17:53:23.568892       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0602 17:53:23.579033       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 17:53:23.583665       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [305a22842312] <==
	* I0602 17:53:09.068826       1 ttlafterfinished_controller.go:109] Starting TTL after finished controller
	I0602 17:53:09.068832       1 shared_informer.go:240] Waiting for caches to sync for TTL after finished
	I0602 17:53:09.369944       1 garbagecollector.go:146] Starting garbage collector controller
	I0602 17:53:09.369988       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0602 17:53:09.370007       1 graph_builder.go:289] GraphBuilder running
	I0602 17:53:09.370042       1 controllermanager.go:605] Started "garbagecollector"
	I0602 17:53:09.567226       1 controllermanager.go:605] Started "job"
	I0602 17:53:09.567292       1 job_controller.go:184] Starting job controller
	I0602 17:53:09.567297       1 shared_informer.go:240] Waiting for caches to sync for job
	I0602 17:53:09.619211       1 controllermanager.go:605] Started "csrapproving"
	I0602 17:53:09.619247       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	I0602 17:53:09.619305       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
	I0602 17:53:09.770382       1 controllermanager.go:605] Started "persistentvolume-binder"
	I0602 17:53:09.770422       1 pv_controller_base.go:310] Starting persistent volume controller
	I0602 17:53:09.770431       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	I0602 17:53:09.954178       1 controllermanager.go:605] Started "pvc-protection"
	I0602 17:53:09.954232       1 pvc_protection_controller.go:103] "Starting PVC protection controller"
	I0602 17:53:09.954238       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
	I0602 17:53:10.185824       1 controllermanager.go:605] Started "serviceaccount"
	I0602 17:53:10.185973       1 serviceaccounts_controller.go:117] Starting service account controller
	I0602 17:53:10.186008       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0602 17:53:10.226373       1 controllermanager.go:605] Started "replicaset"
	I0602 17:53:10.226402       1 replica_set.go:186] Starting replicaset controller
	I0602 17:53:10.226407       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0602 17:53:10.342448       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [5b9aa337d789] <==
	* I0602 17:53:24.173779       1 controllermanager.go:605] Started "csrapproving"
	I0602 17:53:24.173919       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	I0602 17:53:24.173927       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
	I0602 17:53:24.175429       1 controllermanager.go:605] Started "tokencleaner"
	I0602 17:53:24.175542       1 tokencleaner.go:118] Starting token cleaner controller
	I0602 17:53:24.175547       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I0602 17:53:24.175552       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I0602 17:53:24.177123       1 controllermanager.go:605] Started "ephemeral-volume"
	I0602 17:53:24.177197       1 controller.go:170] Starting ephemeral volume controller
	I0602 17:53:24.177205       1 shared_informer.go:240] Waiting for caches to sync for ephemeral
	I0602 17:53:24.178780       1 controllermanager.go:605] Started "csrcleaner"
	I0602 17:53:24.178944       1 cleaner.go:82] Starting CSR cleaner controller
	I0602 17:53:24.180586       1 controllermanager.go:605] Started "persistentvolume-binder"
	I0602 17:53:24.180644       1 pv_controller_base.go:310] Starting persistent volume controller
	I0602 17:53:24.180651       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	I0602 17:53:24.182434       1 controllermanager.go:605] Started "root-ca-cert-publisher"
	I0602 17:53:24.182589       1 publisher.go:107] Starting root CA certificate configmap publisher
	I0602 17:53:24.182598       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	I0602 17:53:24.184297       1 controllermanager.go:605] Started "endpointslice"
	I0602 17:53:24.184472       1 endpointslice_controller.go:257] Starting endpoint slice controller
	I0602 17:53:24.184479       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
	I0602 17:53:24.185986       1 controllermanager.go:605] Started "replicaset"
	I0602 17:53:24.186033       1 replica_set.go:186] Starting replicaset controller
	I0602 17:53:24.186184       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0602 17:53:24.188019       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-scheduler [aa5fd388a3dc] <==
	* E0602 17:53:04.054624       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 17:53:04.054887       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 17:53:04.054933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 17:53:04.900880       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0602 17:53:04.900931       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0602 17:53:05.030821       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 17:53:05.030837       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 17:53:05.083242       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 17:53:05.083278       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 17:53:05.085111       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 17:53:05.085148       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0602 17:53:05.097070       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0602 17:53:05.097128       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 17:53:05.098996       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 17:53:05.099029       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 17:53:05.108058       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0602 17:53:05.108119       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0602 17:53:05.218447       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 17:53:05.218481       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 17:53:06.353408       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0602 17:53:07.249533       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0602 17:53:07.296086       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0602 17:53:15.530315       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 17:53:15.530887       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0602 17:53:15.531937       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kube-scheduler [b22fa12de394] <==
	* I0602 17:53:19.651892       1 serving.go:348] Generated self-signed cert in-memory
	I0602 17:53:21.974625       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0602 17:53:21.977122       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0602 17:53:21.977166       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 17:53:21.977176       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0602 17:53:21.977181       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0602 17:53:21.977133       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0602 17:53:21.977776       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0602 17:53:21.978846       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0602 17:53:21.978921       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0602 17:53:22.078072       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0602 17:53:22.078305       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
	I0602 17:53:22.079103       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 17:52:45 UTC, end at Thu 2022-06-02 17:53:29 UTC. --
	Jun 02 17:53:19 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:19.862429    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:19 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:19.963068    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:20 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:20.064112    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:20 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:20.164429    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:20 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:20.265428    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:20 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:20.365749    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:20 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:20.467271    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:20 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:20.567919    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:20 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:20.668856    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:20 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:20.769339    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:20 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:20.870349    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:20 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:20.970860    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:21 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:21.072274    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:21 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:21.173440    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:21 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:21.273738    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:21 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:21.374215    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:21 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:21.474317    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:21 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:21.574667    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:21 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:21.675297    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:21 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:21.776038    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:21 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: E0602 17:53:21.877147    2622 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220602104828-2113\" not found"
	Jun 02 17:53:22 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: I0602 17:53:22.042936    2622 apiserver.go:52] "Watching apiserver"
	Jun 02 17:53:22 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: I0602 17:53:22.052360    2622 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-20220602104828-2113"
	Jun 02 17:53:22 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: I0602 17:53:22.052462    2622 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-20220602104828-2113"
	Jun 02 17:53:22 kubernetes-upgrade-20220602104828-2113 kubelet[2622]: I0602 17:53:22.079904    2622 reconciler.go:157] "Reconciler: start to sync state"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220602104828-2113 -n kubernetes-upgrade-20220602104828-2113
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-20220602104828-2113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context kubernetes-upgrade-20220602104828-2113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.602740696s)
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-20220602104828-2113 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220602104828-2113 describe pod storage-provisioner: exit status 1 (49.091581ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-20220602104828-2113 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220602104828-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220602104828-2113

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220602104828-2113: (3.155494418s)
--- FAIL: TestKubernetesUpgrade (306.35s)

                                                
                                    
x
+
TestMissingContainerUpgrade (49.65s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.4081105247.exe start -p missing-upgrade-20220602104738-2113 --memory=2200 --driver=docker 
E0602 10:48:01.295795    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.4081105247.exe start -p missing-upgrade-20220602104738-2113 --memory=2200 --driver=docker : exit status 78 (34.235724434s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20220602104738-2113] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220602104738-2113
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-20220602104738-2113" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 16.77 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 39.08 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 62.75 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 77.94 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 98.92 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 121.19 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 143.36 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 166.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 188.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 210.59 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 232.14 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 254.38 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 276.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 298.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 319.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 341.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 363.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 386.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 407.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 430.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 452.58 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 474.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 496.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 518.89 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 541.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.
lz4: 542.91 MiB! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:47:55.035596234 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-20220602104738-2113" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:48:11.384442094 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
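The generated docker.service shown in the diff above follows the standard systemd pattern its own comments describe: an inherited ExecStart= is first cleared, then replaced. A minimal sketch of that pattern, using a drop-in file with placeholder paths and flags rather than the values minikube actually writes, would look like this:

	# Hypothetical illustration only -- not minikube's generated unit.
	# A drop-in clears the inherited ExecStart= and sets a new one, then the
	# service is reloaded and restarted. <<- strips the leading tabs shown here.
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload
	sudo systemctl restart docker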
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.4081105247.exe start -p missing-upgrade-20220602104738-2113 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.4081105247.exe start -p missing-upgrade-20220602104738-2113 --memory=2200 --driver=docker : exit status 70 (4.237276876s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20220602104738-2113] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220602104738-2113
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220602104738-2113" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
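When this reproduces, the details that "systemctl status docker.service" and "journalctl -xe" point at can be read out of the kic container directly. A hypothetical follow-up, assuming the container named in this log has not yet been deleted:

	# Run the suggested diagnostics inside the still-running minikube container.
	docker exec missing-upgrade-20220602104738-2113 systemctl status docker.service --no-pager
	docker exec missing-upgrade-20220602104738-2113 journalctl -u docker.service --no-pager -n 50
	# Inspect the generated unit that the provisioner wrote:
	docker exec missing-upgrade-20220602104738-2113 cat /lib/systemd/system/docker.service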
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.4081105247.exe start -p missing-upgrade-20220602104738-2113 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.4081105247.exe start -p missing-upgrade-20220602104738-2113 --memory=2200 --driver=docker : exit status 70 (4.305953903s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20220602104738-2113] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220602104738-2113
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220602104738-2113" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:482: *** TestMissingContainerUpgrade FAILED at 2022-06-02 10:48:25.116564 -0700 PDT m=+2200.226727109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220602104738-2113
helpers_test.go:235: (dbg) docker inspect missing-upgrade-20220602104738-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "73c7525e723ccac67f8a21f01700ff670f79df80d23f1c61fc159142c6c7f3cf",
	        "Created": "2022-06-02T17:48:03.237410255Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 126984,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:48:03.471338323Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/73c7525e723ccac67f8a21f01700ff670f79df80d23f1c61fc159142c6c7f3cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/73c7525e723ccac67f8a21f01700ff670f79df80d23f1c61fc159142c6c7f3cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/73c7525e723ccac67f8a21f01700ff670f79df80d23f1c61fc159142c6c7f3cf/hosts",
	        "LogPath": "/var/lib/docker/containers/73c7525e723ccac67f8a21f01700ff670f79df80d23f1c61fc159142c6c7f3cf/73c7525e723ccac67f8a21f01700ff670f79df80d23f1c61fc159142c6c7f3cf-json.log",
	        "Name": "/missing-upgrade-20220602104738-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-20220602104738-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/033a96d5d9394785dfc6f134805ed7c1422c6074a0bb7a8c8e5db5de088d22b0-init/diff:/var/lib/docker/overlay2/68730985f7cfd3b645dffaaf625a84e0f45a2e522a7bbd35c74f3e961455c955/diff:/var/lib/docker/overlay2/086a9a5d11913cdd684dceb8ac095d883dd96aeffd0e2f279790b7c3992d505d/diff:/var/lib/docker/overlay2/4a7767ee605e9d3846f50062d68dbb144b6c872e261ea175128352b6a2008186/diff:/var/lib/docker/overlay2/90cf826a4010a4a3587a817d18da915c42b4f8d827d97ec08235753517cf7cba/diff:/var/lib/docker/overlay2/eaa2a7e56e26bbbbe52325d4dd17430b5f88783e1d7106afef9cb75f9f64673a/diff:/var/lib/docker/overlay2/e79fa306793a060f9fc9b0e6d7b5ef03378cf4fbe65d7c89e8f0ccfcf0562282/diff:/var/lib/docker/overlay2/bba27b2a99740d20b41b7850c0375cecc063e583b9afd93a82a7cf23a44cb8f1/diff:/var/lib/docker/overlay2/6cf665e8f6ea0dc4d08cacc5e06e998a6fd9208a2e8197f3d9a7fc6f28369cbc/diff:/var/lib/docker/overlay2/c7213236b6f74adfad523b3a0745db25c9c3b5aaa7be452e8c7562ac9af55529/diff:/var/lib/docker/overlay2/e6b28f
3ff5c1a7df3787620c5367e76e5d082a2719852854a0059452497aac2d/diff:/var/lib/docker/overlay2/c68b5a0b50ed2410ef2428f9ca77e4af1a8ff0f3c90c1ba30ef5f42e7c2f0fe3/diff:/var/lib/docker/overlay2/3062e3729948d2242933a53d46e139d56542622bc84399d578827874566ec45d/diff:/var/lib/docker/overlay2/5ea2fa0caf63c907fa5f7230a4d86016224b7a8090e21ccd0fafbaacc9b02989/diff:/var/lib/docker/overlay2/d321375c7b5f3519273186dddf87e312e97664c8899baad733ed047158e48167/diff:/var/lib/docker/overlay2/51b4d7bff48b339142e73ea6bf81882193895d7beee21763c05808dc42417831/diff:/var/lib/docker/overlay2/6cc3fdbbe55a5101cad2d2f3a19f351f440ca4ce572bd9590d534a0d4e756871/diff:/var/lib/docker/overlay2/c7b81ca26ce547908b8589973f707ab55de536d55f4e91ff33c4ad44c6335157/diff:/var/lib/docker/overlay2/54518fc6c0f4bd67872c1a8f18d57e28e9977220eb6b786882bdee74547cfd52/diff:/var/lib/docker/overlay2/a70efa960030191dd9226c96dd524ab1af6b4c40f8037297a048af6ce65e7b91/diff:/var/lib/docker/overlay2/4287ba7d9b601768fcd455102b8577d6e47986dacfe67932cb862726d4269593/diff:/var/lib/d
ocker/overlay2/8cc5c99c5858de4fd5685625834a50fc3618c82d09969525ed7b0605000309eb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/033a96d5d9394785dfc6f134805ed7c1422c6074a0bb7a8c8e5db5de088d22b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/033a96d5d9394785dfc6f134805ed7c1422c6074a0bb7a8c8e5db5de088d22b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/033a96d5d9394785dfc6f134805ed7c1422c6074a0bb7a8c8e5db5de088d22b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20220602104738-2113",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20220602104738-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20220602104738-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20220602104738-2113",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20220602104738-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5ad7d3c3fa57578cb14447c637986e9f2e81f6909a9dc84ca74b7fe5849789ce",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62173"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62174"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62175"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5ad7d3c3fa57",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "1895c881855c5e87f6c64de4c5f32b7771424de7bd757da7eacfc00664c50d20",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "2d5114eaf3d33c2727c4ba12e4dc285212892054552f33143ece82afb1966168",
	                    "EndpointID": "1895c881855c5e87f6c64de4c5f32b7771424de7bd757da7eacfc00664c50d20",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
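Most of the inspect output above is boilerplate; a hypothetical shorthand for the fields that matter here (container state and published ports, requires jq) would be:

	# Reduce the docker inspect JSON to state and port bindings only.
	docker inspect missing-upgrade-20220602104738-2113 \
	  | jq '.[0] | {State: .State.Status, Ports: .NetworkSettings.Ports}'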
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220602104738-2113 -n missing-upgrade-20220602104738-2113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220602104738-2113 -n missing-upgrade-20220602104738-2113: exit status 6 (430.343753ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 10:48:25.610197   10151 status.go:413] kubeconfig endpoint: extract IP: "missing-upgrade-20220602104738-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-20220602104738-2113" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-20220602104738-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220602104738-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220602104738-2113: (2.550700373s)
--- FAIL: TestMissingContainerUpgrade (49.65s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (46.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1871654089.exe start -p stopped-upgrade-20220602104942-2113 --memory=2200 --vm-driver=docker 
E0602 10:49:49.521332    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 10:50:10.001611    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1871654089.exe start -p stopped-upgrade-20220602104942-2113 --memory=2200 --vm-driver=docker : exit status 70 (34.526129311s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220602104942-2113] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig3592941161
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:49:58.567495897 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-20220602104942-2113" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:50:15.137494853 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-20220602104942-2113", then "minikube start -p stopped-upgrade-20220602104942-2113 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB  (download progress output condensed)
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:50:15.137494853 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1871654089.exe start -p stopped-upgrade-20220602104942-2113 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1871654089.exe start -p stopped-upgrade-20220602104942-2113 --memory=2200 --vm-driver=docker : exit status 70 (4.509985514s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220602104942-2113] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig102159593
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220602104942-2113" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1871654089.exe start -p stopped-upgrade-20220602104942-2113 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1871654089.exe start -p stopped-upgrade-20220602104942-2113 --memory=2200 --vm-driver=docker : exit status 70 (4.525278936s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220602104942-2113] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig4037992790
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220602104942-2113" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (46.05s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (62.74s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20220602105035-2113 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20220602105035-2113 --output=json --layout=cluster: exit status 2 (16.10735725s)

                                                
                                                
-- stdout --
	{"Name":"pause-20220602105035-2113","StatusCode":405,"StatusName":"Stopped","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220602105035-2113","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
pause_test.go:200: incorrect status code: 405
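The assertion above checks the numeric StatusCode fields in the JSON printed by "status --output=json --layout=cluster". A hypothetical way to list those codes per component (requires jq; profile name taken from this log) is:

	# Summarize cluster, node, and component status codes from the JSON above.
	out/minikube-darwin-amd64 status -p pause-20220602105035-2113 --output=json --layout=cluster \
	  | jq '{cluster: .StatusCode, nodes: [.Nodes[] | {name: .Name, code: .StatusCode, components: (.Components | map_values(.StatusCode))}]}'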
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220602105035-2113
helpers_test.go:235: (dbg) docker inspect pause-20220602105035-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd30b59cfdbeda6cc79727bfe67ea84447bf24b349ee9f1146ba30fc086ab8a2",
	        "Created": "2022-06-02T17:50:42.296862826Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134689,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:50:42.612010229Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/dd30b59cfdbeda6cc79727bfe67ea84447bf24b349ee9f1146ba30fc086ab8a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd30b59cfdbeda6cc79727bfe67ea84447bf24b349ee9f1146ba30fc086ab8a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd30b59cfdbeda6cc79727bfe67ea84447bf24b349ee9f1146ba30fc086ab8a2/hosts",
	        "LogPath": "/var/lib/docker/containers/dd30b59cfdbeda6cc79727bfe67ea84447bf24b349ee9f1146ba30fc086ab8a2/dd30b59cfdbeda6cc79727bfe67ea84447bf24b349ee9f1146ba30fc086ab8a2-json.log",
	        "Name": "/pause-20220602105035-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20220602105035-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220602105035-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/76ef170b6035ee1d6e9c677d3fe0ce7ebe0b23f9bf6c7088ca06e946c3d68b5c-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/76ef170b6035ee1d6e9c677d3fe0ce7ebe0b23f9bf6c7088ca06e946c3d68b5c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/76ef170b6035ee1d6e9c677d3fe0ce7ebe0b23f9bf6c7088ca06e946c3d68b5c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/76ef170b6035ee1d6e9c677d3fe0ce7ebe0b23f9bf6c7088ca06e946c3d68b5c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20220602105035-2113",
	                "Source": "/var/lib/docker/volumes/pause-20220602105035-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220602105035-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220602105035-2113",
	                "name.minikube.sigs.k8s.io": "pause-20220602105035-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0316f91e8afcfe11db9345ee400c4f5912f1493f7b4727933afc12a5128be0b0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63417"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63413"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63414"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63416"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0316f91e8afc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220602105035-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dd30b59cfdbe",
	                        "pause-20220602105035-2113"
	                    ],
	                    "NetworkID": "b7cdb924ea38c01113aae577db468930e0907f1f17600da4533412394f64f569",
	                    "EndpointID": "f903e21d5d2e6f85772b8bdb257cdf3458ad42a6841b3416e791a81c19917c98",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
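For reference, the inspect output above is where the SSH endpoint used by the next status check comes from: container port 22/tcp is published on 127.0.0.1:63417. A minimal Go sketch of reading that mapping back out of `docker container inspect` JSON, assuming only the field layout shown above (illustrative, not minikube's own code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// binding mirrors one entry of NetworkSettings.Ports in the inspect output above.
	type binding struct {
		HostIp   string
		HostPort string
	}

	type container struct {
		NetworkSettings struct {
			Ports map[string][]binding
		}
	}

	func main() {
		// usage: docker container inspect pause-20220602105035-2113 | go run .
		var cs []container
		if err := json.NewDecoder(os.Stdin).Decode(&cs); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, c := range cs {
			for _, b := range c.NetworkSettings.Ports["22/tcp"] {
				fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort)
			}
		}
	}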
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220602105035-2113 -n pause-20220602105035-2113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220602105035-2113 -n pause-20220602105035-2113: exit status 2 (16.104193962s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
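For context, the failure condition here is the pairing just above: `status --format={{.Host}}` prints "Running" while the command itself exits with status 2, so the helper records the status error and moves on to post-mortem collection. A hedged sketch of reproducing that check by hand, in the same --format template style the helper uses; treating the Kubelet and APIServer field names as assumptions about minikube's status struct:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "pause-20220602105035-2113"
		// same style of Go template the helper uses, widened to a few more fields
		cmd := exec.Command("out/minikube-darwin-amd64", "status", "-p", profile,
			"--format={{.Host}} {{.Kubelet}} {{.APIServer}}")
		out, err := cmd.Output()
		fmt.Printf("status output: %s\n", out)
		if exitErr, ok := err.(*exec.ExitError); ok {
			// a non-zero code (exit status 2 in this run) signals degraded components
			fmt.Println("exit code:", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("could not run minikube:", err)
		}
	}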
helpers_test.go:244: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-20220602105035-2113 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220602105035-2113 logs -n 25: (14.313977403s)
helpers_test.go:252: TestPause/serial/VerifyStatus logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                  |                Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                     | offline-docker-20220602104455-2113     | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:45 PDT | 02 Jun 22 10:45 PDT |
	|         | offline-docker-20220602104455-2113     |                                        |         |                |                     |                     |
	| start   | -p                                     | force-systemd-env-20220602104521-2113  | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:45 PDT | 02 Jun 22 10:45 PDT |
	|         | force-systemd-env-20220602104521-2113  |                                        |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr -v=5   |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| ssh     | force-systemd-env-20220602104521-2113  | force-systemd-env-20220602104521-2113  | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:45 PDT | 02 Jun 22 10:45 PDT |
	|         | ssh docker info --format               |                                        |         |                |                     |                     |
	|         | {{.CgroupDriver}}                      |                                        |         |                |                     |                     |
	| delete  | -p                                     | force-systemd-env-20220602104521-2113  | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:45 PDT | 02 Jun 22 10:45 PDT |
	|         | force-systemd-env-20220602104521-2113  |                                        |         |                |                     |                     |
	| start   | -p                                     | force-systemd-flag-20220602104538-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:45 PDT | 02 Jun 22 10:46 PDT |
	|         | force-systemd-flag-20220602104538-2113 |                                        |         |                |                     |                     |
	|         | --memory=2048 --force-systemd          |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker |                                        |         |                |                     |                     |
	|         |                                        |                                        |         |                |                     |                     |
	| ssh     | force-systemd-flag-20220602104538-2113 | force-systemd-flag-20220602104538-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | ssh docker info --format               |                                        |         |                |                     |                     |
	|         | {{.CgroupDriver}}                      |                                        |         |                |                     |                     |
	| delete  | -p                                     | force-systemd-flag-20220602104538-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | force-systemd-flag-20220602104538-2113 |                                        |         |                |                     |                     |
	| start   | -p                                     | docker-flags-20220602104549-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:45 PDT | 02 Jun 22 10:46 PDT |
	|         | docker-flags-20220602104549-2113       |                                        |         |                |                     |                     |
	|         | --cache-images=false                   |                                        |         |                |                     |                     |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --install-addons=false                 |                                        |         |                |                     |                     |
	|         | --wait=false                           |                                        |         |                |                     |                     |
	|         | --docker-env=FOO=BAR                   |                                        |         |                |                     |                     |
	|         | --docker-env=BAZ=BAT                   |                                        |         |                |                     |                     |
	|         | --docker-opt=debug                     |                                        |         |                |                     |                     |
	|         | --docker-opt=icc=true                  |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=5                 |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| ssh     | docker-flags-20220602104549-2113       | docker-flags-20220602104549-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | ssh sudo systemctl show                |                                        |         |                |                     |                     |
	|         | docker --property=Environment          |                                        |         |                |                     |                     |
	|         | --no-pager                             |                                        |         |                |                     |                     |
	| ssh     | docker-flags-20220602104549-2113       | docker-flags-20220602104549-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | ssh sudo systemctl show docker         |                                        |         |                |                     |                     |
	|         | --property=ExecStart --no-pager        |                                        |         |                |                     |                     |
	| delete  | -p                                     | docker-flags-20220602104549-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | docker-flags-20220602104549-2113       |                                        |         |                |                     |                     |
	| start   | -p                                     | cert-expiration-20220602104608-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | cert-expiration-20220602104608-2113    |                                        |         |                |                     |                     |
	|         | --memory=2048 --cert-expiration=3m     |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| start   | -p                                     | cert-options-20220602104618-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | cert-options-20220602104618-2113       |                                        |         |                |                     |                     |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                                        |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                                        |         |                |                     |                     |
	|         | --apiserver-names=localhost            |                                        |         |                |                     |                     |
	|         | --apiserver-names=www.google.com       |                                        |         |                |                     |                     |
	|         | --apiserver-port=8555                  |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	|         | --apiserver-name=localhost             |                                        |         |                |                     |                     |
	| ssh     | cert-options-20220602104618-2113       | cert-options-20220602104618-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | ssh openssl x509 -text -noout -in      |                                        |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                                        |         |                |                     |                     |
	| ssh     | -p                                     | cert-options-20220602104618-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | cert-options-20220602104618-2113       |                                        |         |                |                     |                     |
	|         | -- sudo cat                            |                                        |         |                |                     |                     |
	|         | /etc/kubernetes/admin.conf             |                                        |         |                |                     |                     |
	| delete  | -p                                     | cert-options-20220602104618-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:46 PDT | 02 Jun 22 10:46 PDT |
	|         | cert-options-20220602104618-2113       |                                        |         |                |                     |                     |
	| delete  | -p                                     | running-upgrade-20220602104647-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:47 PDT | 02 Jun 22 10:47 PDT |
	|         | running-upgrade-20220602104647-2113    |                                        |         |                |                     |                     |
	| delete  | -p                                     | missing-upgrade-20220602104738-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:48 PDT | 02 Jun 22 10:48 PDT |
	|         | missing-upgrade-20220602104738-2113    |                                        |         |                |                     |                     |
	| start   | -p                                     | cert-expiration-20220602104608-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:49 PDT | 02 Jun 22 10:49 PDT |
	|         | cert-expiration-20220602104608-2113    |                                        |         |                |                     |                     |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --cert-expiration=8760h                |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| delete  | -p                                     | cert-expiration-20220602104608-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:49 PDT | 02 Jun 22 10:49 PDT |
	|         | cert-expiration-20220602104608-2113    |                                        |         |                |                     |                     |
	| logs    | -p                                     | stopped-upgrade-20220602104942-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:50 PDT | 02 Jun 22 10:50 PDT |
	|         | stopped-upgrade-20220602104942-2113    |                                        |         |                |                     |                     |
	| delete  | -p                                     | stopped-upgrade-20220602104942-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:50 PDT | 02 Jun 22 10:50 PDT |
	|         | stopped-upgrade-20220602104942-2113    |                                        |         |                |                     |                     |
	| start   | -p pause-20220602105035-2113           | pause-20220602105035-2113              | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:50 PDT | 02 Jun 22 10:51 PDT |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --install-addons=false                 |                                        |         |                |                     |                     |
	|         | --wait=all --driver=docker             |                                        |         |                |                     |                     |
	| start   | -p pause-20220602105035-2113           | pause-20220602105035-2113              | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:51 PDT | 02 Jun 22 10:51 PDT |
	|         | --alsologtostderr -v=1                 |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| pause   | -p pause-20220602105035-2113           | pause-20220602105035-2113              | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:51 PDT | 02 Jun 22 10:51 PDT |
	|         | --alsologtostderr -v=5                 |                                        |         |                |                     |                     |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 10:51:14
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 10:51:14.905008   10827 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:51:14.905171   10827 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:51:14.905176   10827 out.go:309] Setting ErrFile to fd 2...
	I0602 10:51:14.905180   10827 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:51:14.905274   10827 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 10:51:14.905538   10827 out.go:303] Setting JSON to false
	I0602 10:51:14.920750   10827 start.go:115] hostinfo: {"hostname":"37309.local","uptime":3044,"bootTime":1654189230,"procs":355,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 10:51:14.920861   10827 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 10:51:14.943492   10827 out.go:177] * [pause-20220602105035-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 10:51:14.965441   10827 notify.go:193] Checking for updates...
	I0602 10:51:14.987119   10827 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 10:51:15.009446   10827 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 10:51:15.031305   10827 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 10:51:15.052104   10827 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 10:51:15.073455   10827 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 10:51:15.095852   10827 config.go:178] Loaded profile config "pause-20220602105035-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 10:51:15.096495   10827 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 10:51:15.168245   10827 docker.go:137] docker version: linux-20.10.14
	I0602 10:51:15.168392   10827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:51:15.294685   10827 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:58 SystemTime:2022-06-02 17:51:15.239218058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:51:15.338230   10827 out.go:177] * Using the docker driver based on existing profile
	I0602 10:51:15.359353   10827 start.go:284] selected driver: docker
	I0602 10:51:15.359370   10827 start.go:806] validating driver "docker" against &{Name:pause-20220602105035-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:pause-20220602105035-2113 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false}
	I0602 10:51:15.359441   10827 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 10:51:15.359663   10827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:51:15.495731   10827 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:58 SystemTime:2022-06-02 17:51:15.440364457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:51:15.497786   10827 cni.go:95] Creating CNI manager for ""
	I0602 10:51:15.497804   10827 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:51:15.497821   10827 start_flags.go:306] config:
	{Name:pause-20220602105035-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:pause-20220602105035-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:51:15.541616   10827 out.go:177] * Starting control plane node pause-20220602105035-2113 in cluster pause-20220602105035-2113
	I0602 10:51:15.563711   10827 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 10:51:15.585396   10827 out.go:177] * Pulling base image ...
	I0602 10:51:15.628531   10827 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 10:51:15.628544   10827 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 10:51:15.628619   10827 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 10:51:15.628643   10827 cache.go:57] Caching tarball of preloaded images
	I0602 10:51:15.628845   10827 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 10:51:15.629297   10827 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 10:51:15.629876   10827 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113/config.json ...
	I0602 10:51:15.694596   10827 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 10:51:15.694610   10827 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 10:51:15.694618   10827 cache.go:206] Successfully downloaded all kic artifacts
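The preload check logged just above amounts to a stat of a versioned tarball under the MINIKUBE_HOME cache directory. A minimal sketch of that existence check, with the `v18` preload schema segment hard-coded as an assumption copied from the path in this log:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadExists reports whether the preloaded-images tarball for the given
	// Kubernetes version and container runtime is already cached locally.
	func preloadExists(minikubeHome, k8sVersion, runtime string) bool {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
		_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
		return err == nil
	}

	func main() {
		fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.23.6", "docker"))
	}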
	I0602 10:51:15.694681   10827 start.go:352] acquiring machines lock for pause-20220602105035-2113: {Name:mk8e3320f6a9f02fed0b44f013b0572c3067741e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:51:15.694753   10827 start.go:356] acquired machines lock for "pause-20220602105035-2113" in 56.029µs
	I0602 10:51:15.694772   10827 start.go:94] Skipping create...Using existing machine configuration
	I0602 10:51:15.694782   10827 fix.go:55] fixHost starting: 
	I0602 10:51:15.695006   10827 cli_runner.go:164] Run: docker container inspect pause-20220602105035-2113 --format={{.State.Status}}
	I0602 10:51:15.765299   10827 fix.go:103] recreateIfNeeded on pause-20220602105035-2113: state=Running err=<nil>
	W0602 10:51:15.765327   10827 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 10:51:15.787349   10827 out.go:177] * Updating the running docker "pause-20220602105035-2113" container ...
	I0602 10:51:15.830154   10827 machine.go:88] provisioning docker machine ...
	I0602 10:51:15.830210   10827 ubuntu.go:169] provisioning hostname "pause-20220602105035-2113"
	I0602 10:51:15.830355   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:15.902048   10827 main.go:134] libmachine: Using SSH client type: native
	I0602 10:51:15.902234   10827 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63417 <nil> <nil>}
	I0602 10:51:15.902246   10827 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-20220602105035-2113 && echo "pause-20220602105035-2113" | sudo tee /etc/hostname
	I0602 10:51:16.028620   10827 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220602105035-2113
	
	I0602 10:51:16.028706   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:16.099529   10827 main.go:134] libmachine: Using SSH client type: native
	I0602 10:51:16.099768   10827 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63417 <nil> <nil>}
	I0602 10:51:16.099781   10827 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20220602105035-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220602105035-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20220602105035-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 10:51:16.219179   10827 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 10:51:16.219200   10827 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 10:51:16.219222   10827 ubuntu.go:177] setting up certificates
	I0602 10:51:16.219235   10827 provision.go:83] configureAuth start
	I0602 10:51:16.219305   10827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220602105035-2113
	I0602 10:51:16.289520   10827 provision.go:138] copyHostCerts
	I0602 10:51:16.289600   10827 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 10:51:16.289609   10827 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 10:51:16.289703   10827 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 10:51:16.289909   10827 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 10:51:16.289920   10827 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 10:51:16.289977   10827 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 10:51:16.290123   10827 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 10:51:16.290128   10827 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 10:51:16.290181   10827 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 10:51:16.290289   10827 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.pause-20220602105035-2113 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20220602105035-2113]
	I0602 10:51:16.365285   10827 provision.go:172] copyRemoteCerts
	I0602 10:51:16.365335   10827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 10:51:16.365384   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:16.436288   10827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63417 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/pause-20220602105035-2113/id_rsa Username:docker}
	I0602 10:51:16.522250   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0602 10:51:16.539338   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 10:51:16.556178   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 10:51:16.573194   10827 provision.go:86] duration metric: configureAuth took 353.945442ms
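The server.pem copied a few lines up was generated with the SAN list shown at the `generating server cert` line. A small standard-library sketch for inspecting the SANs on such a cert; the relative path below is a placeholder, not the path from this run:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// placeholder path; point this at the provisioned server.pem to inspect it
		data, err := os.ReadFile("server.pem")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
	}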
	I0602 10:51:16.573214   10827 ubuntu.go:193] setting minikube options for container-runtime
	I0602 10:51:16.573348   10827 config.go:178] Loaded profile config "pause-20220602105035-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 10:51:16.573444   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:16.643642   10827 main.go:134] libmachine: Using SSH client type: native
	I0602 10:51:16.643800   10827 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63417 <nil> <nil>}
	I0602 10:51:16.643813   10827 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 10:51:16.761911   10827 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 10:51:16.761922   10827 ubuntu.go:71] root file system type: overlay
	I0602 10:51:16.762042   10827 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 10:51:16.762109   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:16.832808   10827 main.go:134] libmachine: Using SSH client type: native
	I0602 10:51:16.832946   10827 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63417 <nil> <nil>}
	I0602 10:51:16.833028   10827 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 10:51:16.959718   10827 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 10:51:16.959806   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:17.030144   10827 main.go:134] libmachine: Using SSH client type: native
	I0602 10:51:17.030283   10827 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63417 <nil> <nil>}
	I0602 10:51:17.030297   10827 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 10:51:17.149379   10827 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 10:51:17.149395   10827 machine.go:91] provisioned docker machine in 1.319218373s
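The SSH one-liner above only swaps in docker.service.new and restarts Docker when the rendered unit differs from the installed one, which is presumably why this re-provision of an already-running cluster completes without a daemon restart here. A hedged Go sketch of that compare-then-swap pattern (not minikube's actual implementation):

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	// installUnit swaps the new unit into place and restarts docker only when the
	// rendered file actually differs from the one already installed.
	func installUnit(current, proposed string) error {
		oldUnit, _ := os.ReadFile(current) // a missing current unit reads as empty
		newUnit, err := os.ReadFile(proposed)
		if err != nil {
			return err
		}
		if bytes.Equal(oldUnit, newUnit) {
			return os.Remove(proposed) // unchanged: skip the daemon restart
		}
		if err := os.Rename(proposed, current); err != nil {
			return err
		}
		for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
			if err := exec.Command("systemctl", append([]string{"-f"}, args...)...).Run(); err != nil {
				return err
			}
		}
		return nil
	}

	func main() {
		if err := installUnit("/lib/systemd/system/docker.service", "/lib/systemd/system/docker.service.new"); err != nil {
			os.Exit(1)
		}
	}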
	I0602 10:51:17.149405   10827 start.go:306] post-start starting for "pause-20220602105035-2113" (driver="docker")
	I0602 10:51:17.149410   10827 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 10:51:17.149485   10827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 10:51:17.149537   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:17.220202   10827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63417 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/pause-20220602105035-2113/id_rsa Username:docker}
	I0602 10:51:17.306689   10827 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 10:51:17.310321   10827 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 10:51:17.310336   10827 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 10:51:17.310349   10827 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 10:51:17.310354   10827 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 10:51:17.310362   10827 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 10:51:17.310475   10827 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 10:51:17.310617   10827 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 10:51:17.310782   10827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 10:51:17.317861   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 10:51:17.335230   10827 start.go:309] post-start completed in 185.815963ms
	I0602 10:51:17.335314   10827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 10:51:17.335367   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:17.405738   10827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63417 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/pause-20220602105035-2113/id_rsa Username:docker}
	I0602 10:51:17.489909   10827 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 10:51:17.494464   10827 fix.go:57] fixHost completed within 1.799673586s
	I0602 10:51:17.494477   10827 start.go:81] releasing machines lock for "pause-20220602105035-2113", held for 1.799709758s
	I0602 10:51:17.494551   10827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220602105035-2113
	I0602 10:51:17.564785   10827 ssh_runner.go:195] Run: systemctl --version
	I0602 10:51:17.564797   10827 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 10:51:17.564842   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:17.564853   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:17.641822   10827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63417 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/pause-20220602105035-2113/id_rsa Username:docker}
	I0602 10:51:17.643673   10827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63417 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/pause-20220602105035-2113/id_rsa Username:docker}
	I0602 10:51:17.726719   10827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 10:51:17.860253   10827 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 10:51:17.871797   10827 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 10:51:17.871871   10827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 10:51:17.881207   10827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 10:51:17.895417   10827 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 10:51:17.998628   10827 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 10:51:18.097410   10827 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 10:51:18.107176   10827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 10:51:18.206407   10827 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 10:51:18.216141   10827 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 10:51:18.250129   10827 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 10:51:18.329038   10827 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 10:51:18.329117   10827 cli_runner.go:164] Run: docker exec -t pause-20220602105035-2113 dig +short host.docker.internal
	I0602 10:51:18.454315   10827 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 10:51:18.454417   10827 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 10:51:18.458605   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:18.529798   10827 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 10:51:18.529884   10827 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 10:51:18.562080   10827 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 10:51:18.562095   10827 docker.go:541] Images already preloaded, skipping extraction
	I0602 10:51:18.562156   10827 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 10:51:18.590978   10827 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 10:51:18.591002   10827 cache_images.go:84] Images are preloaded, skipping loading
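The "already preloaded" decision above comes from comparing the `docker images --format {{.Repository}}:{{.Tag}}` listing against the image set the preload is expected to provide. A minimal sketch of that kind of comparison; the required list below is copied from the listing in this log rather than from minikube's source:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// a few of the images the preload should provide, taken from the listing above
		required := []string{
			"k8s.gcr.io/kube-apiserver:v1.23.6",
			"k8s.gcr.io/etcd:3.5.1-0",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			have[line] = true
		}
		for _, img := range required {
			if !have[img] {
				fmt.Println("missing from daemon, extraction needed:", img)
			}
		}
	}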
	I0602 10:51:18.591072   10827 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 10:51:18.661948   10827 cni.go:95] Creating CNI manager for ""
	I0602 10:51:18.661960   10827 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:51:18.661975   10827 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 10:51:18.661987   10827 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220602105035-2113 NodeName:pause-20220602105035-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minik
ube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 10:51:18.662139   10827 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "pause-20220602105035-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 10:51:18.662227   10827 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=pause-20220602105035-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:pause-20220602105035-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 10:51:18.662283   10827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 10:51:18.670036   10827 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 10:51:18.670098   10827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 10:51:18.677506   10827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0602 10:51:18.689649   10827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 10:51:18.702380   10827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2046 bytes)
	I0602 10:51:18.715309   10827 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0602 10:51:18.719144   10827 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113 for IP: 192.168.49.2
	I0602 10:51:18.719259   10827 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 10:51:18.719313   10827 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 10:51:18.719395   10827 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113/client.key
	I0602 10:51:18.719446   10827 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113/apiserver.key.dd3b5fb2
	I0602 10:51:18.719495   10827 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113/proxy-client.key
	I0602 10:51:18.719702   10827 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 10:51:18.719741   10827 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 10:51:18.719752   10827 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 10:51:18.719786   10827 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 10:51:18.719820   10827 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 10:51:18.719848   10827 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 10:51:18.719913   10827 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 10:51:18.721051   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 10:51:18.739812   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0602 10:51:18.758194   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 10:51:18.775538   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 10:51:18.792193   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 10:51:18.810994   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 10:51:18.830285   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 10:51:18.848334   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 10:51:18.865310   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 10:51:18.882001   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 10:51:18.898340   10827 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 10:51:18.914659   10827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 10:51:18.927453   10827 ssh_runner.go:195] Run: openssl version
	I0602 10:51:18.932342   10827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 10:51:18.940691   10827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 10:51:18.944636   10827 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 10:51:18.944677   10827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 10:51:18.950010   10827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
	I0602 10:51:18.957445   10827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 10:51:18.965198   10827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 10:51:18.969291   10827 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 10:51:18.969340   10827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 10:51:18.974562   10827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 10:51:18.981735   10827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 10:51:18.989827   10827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:51:18.993584   10827 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:51:18.993624   10827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:51:18.999119   10827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 10:51:19.006676   10827 kubeadm.go:395] StartCluster: {Name:pause-20220602105035-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:pause-20220602105035-2113 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false}
	I0602 10:51:19.006780   10827 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 10:51:19.034818   10827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 10:51:19.043011   10827 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 10:51:19.043026   10827 kubeadm.go:626] restartCluster start
	I0602 10:51:19.043084   10827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 10:51:19.050063   10827 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 10:51:19.050127   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:19.121954   10827 kubeconfig.go:92] found "pause-20220602105035-2113" server: "https://127.0.0.1:63416"
	I0602 10:51:19.122361   10827 kapi.go:59] client config for pause-20220602105035-2113: &rest.Config{Host:"https://127.0.0.1:63416", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113/client.key"
, CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0602 10:51:19.122880   10827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 10:51:19.132218   10827 api_server.go:165] Checking apiserver status ...
	I0602 10:51:19.132297   10827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 10:51:19.141802   10827 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1596/cgroup
	W0602 10:51:19.150524   10827 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1596/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0602 10:51:19.150537   10827 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63416/healthz ...
	I0602 10:51:19.156308   10827 api_server.go:266] https://127.0.0.1:63416/healthz returned 200:
	ok
	I0602 10:51:19.167438   10827 system_pods.go:86] 7 kube-system pods found
	I0602 10:51:19.167459   10827 system_pods.go:89] "coredns-64897985d-m4864" [90023fe1-6712-4c41-b4dc-7e2bf555424b] Running
	I0602 10:51:19.167464   10827 system_pods.go:89] "coredns-64897985d-nr8b7" [c3ed3851-e3ea-410a-8099-98ac12bba157] Running
	I0602 10:51:19.167469   10827 system_pods.go:89] "etcd-pause-20220602105035-2113" [8af7fce6-fdfe-4225-89c4-0130b6606e5e] Running
	I0602 10:51:19.167473   10827 system_pods.go:89] "kube-apiserver-pause-20220602105035-2113" [444ba72e-4014-4099-8acc-dabaeca53bc3] Running
	I0602 10:51:19.167477   10827 system_pods.go:89] "kube-controller-manager-pause-20220602105035-2113" [3f4aa991-b8d8-4907-b9c7-8bf54fdb9bc3] Running
	I0602 10:51:19.167481   10827 system_pods.go:89] "kube-proxy-4qmtk" [ed3a457d-a11f-4b62-b1bd-26e05b8f4744] Running
	I0602 10:51:19.167485   10827 system_pods.go:89] "kube-scheduler-pause-20220602105035-2113" [9a9738ea-f526-449b-a650-732909a62181] Running
	I0602 10:51:19.168738   10827 api_server.go:140] control plane version: v1.23.6
	I0602 10:51:19.168748   10827 kubeadm.go:620] The running cluster does not require reconfiguration: 127.0.0.1
	I0602 10:51:19.168753   10827 kubeadm.go:674] Taking a shortcut, as the cluster seems to be properly configured
	I0602 10:51:19.168760   10827 kubeadm.go:630] restartCluster took 125.730142ms
	I0602 10:51:19.168767   10827 kubeadm.go:397] StartCluster complete in 162.097661ms
	I0602 10:51:19.168778   10827 settings.go:142] acquiring lock: {Name:mka48fc2cc9e132f8df9370d54d7f09abdd5d2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:51:19.168855   10827 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 10:51:19.169275   10827 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:51:19.170068   10827 kapi.go:59] client config for pause-20220602105035-2113: &rest.Config{Host:"https://127.0.0.1:63416", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113/client.key"
, CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0602 10:51:19.172331   10827 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220602105035-2113" rescaled to 1
	I0602 10:51:19.172376   10827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 10:51:19.172375   10827 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 10:51:19.172397   10827 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0602 10:51:19.172536   10827 config.go:178] Loaded profile config "pause-20220602105035-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 10:51:19.215707   10827 out.go:177] * Verifying Kubernetes components...
	I0602 10:51:19.215805   10827 addons.go:65] Setting default-storageclass=true in profile "pause-20220602105035-2113"
	I0602 10:51:19.215805   10827 addons.go:65] Setting storage-provisioner=true in profile "pause-20220602105035-2113"
	I0602 10:51:19.236629   10827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220602105035-2113"
	I0602 10:51:19.236648   10827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 10:51:19.223093   10827 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0602 10:51:19.236660   10827 addons.go:153] Setting addon storage-provisioner=true in "pause-20220602105035-2113"
	W0602 10:51:19.236674   10827 addons.go:165] addon storage-provisioner should already be in state true
	I0602 10:51:19.236747   10827 host.go:66] Checking if "pause-20220602105035-2113" exists ...
	I0602 10:51:19.237086   10827 cli_runner.go:164] Run: docker container inspect pause-20220602105035-2113 --format={{.State.Status}}
	I0602 10:51:19.238323   10827 cli_runner.go:164] Run: docker container inspect pause-20220602105035-2113 --format={{.State.Status}}
	I0602 10:51:19.249169   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:19.318143   10827 kapi.go:59] client config for pause-20220602105035-2113: &rest.Config{Host:"https://127.0.0.1:63416", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/pause-20220602105035-2113/client.key"
, CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0602 10:51:19.321018   10827 addons.go:153] Setting addon default-storageclass=true in "pause-20220602105035-2113"
	W0602 10:51:19.321029   10827 addons.go:165] addon default-storageclass should already be in state true
	I0602 10:51:19.321045   10827 host.go:66] Checking if "pause-20220602105035-2113" exists ...
	I0602 10:51:19.321375   10827 cli_runner.go:164] Run: docker container inspect pause-20220602105035-2113 --format={{.State.Status}}
	I0602 10:51:19.345361   10827 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 10:51:19.366047   10827 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 10:51:19.366061   10827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 10:51:19.366125   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:19.368074   10827 node_ready.go:35] waiting up to 6m0s for node "pause-20220602105035-2113" to be "Ready" ...
	I0602 10:51:19.371540   10827 node_ready.go:49] node "pause-20220602105035-2113" has status "Ready":"True"
	I0602 10:51:19.371550   10827 node_ready.go:38] duration metric: took 3.369128ms waiting for node "pause-20220602105035-2113" to be "Ready" ...
	I0602 10:51:19.371556   10827 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 10:51:19.377260   10827 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-m4864" in "kube-system" namespace to be "Ready" ...
	I0602 10:51:19.383231   10827 pod_ready.go:92] pod "coredns-64897985d-m4864" in "kube-system" namespace has status "Ready":"True"
	I0602 10:51:19.383241   10827 pod_ready.go:81] duration metric: took 5.967581ms waiting for pod "coredns-64897985d-m4864" in "kube-system" namespace to be "Ready" ...
	I0602 10:51:19.383251   10827 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-nr8b7" in "kube-system" namespace to be "Ready" ...
	I0602 10:51:19.388074   10827 pod_ready.go:92] pod "coredns-64897985d-nr8b7" in "kube-system" namespace has status "Ready":"True"
	I0602 10:51:19.388083   10827 pod_ready.go:81] duration metric: took 4.826634ms waiting for pod "coredns-64897985d-nr8b7" in "kube-system" namespace to be "Ready" ...
	I0602 10:51:19.388089   10827 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220602105035-2113" in "kube-system" namespace to be "Ready" ...
	I0602 10:51:19.393098   10827 pod_ready.go:92] pod "etcd-pause-20220602105035-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 10:51:19.393108   10827 pod_ready.go:81] duration metric: took 5.014762ms waiting for pod "etcd-pause-20220602105035-2113" in "kube-system" namespace to be "Ready" ...
	I0602 10:51:19.393115   10827 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220602105035-2113" in "kube-system" namespace to be "Ready" ...
	I0602 10:51:19.394975   10827 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 10:51:19.394987   10827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 10:51:19.395049   10827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220602105035-2113
	I0602 10:51:19.399939   10827 pod_ready.go:92] pod "kube-apiserver-pause-20220602105035-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 10:51:19.399953   10827 pod_ready.go:81] duration metric: took 6.833911ms waiting for pod "kube-apiserver-pause-20220602105035-2113" in "kube-system" namespace to be "Ready" ...
	I0602 10:51:19.399960   10827 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220602105035-2113" in "kube-system" namespace to be "Ready" ...
	I0602 10:51:19.441512   10827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63417 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/pause-20220602105035-2113/id_rsa Username:docker}
	I0602 10:51:19.465949   10827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63417 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/pause-20220602105035-2113/id_rsa Username:docker}
	I0602 10:51:19.532713   10827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 10:51:19.558229   10827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 10:51:19.805464   10827 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0602 10:51:19.842626   10827 addons.go:417] enableAddons completed in 670.206545ms
	I0602 10:51:19.844171   10827 pod_ready.go:92] pod "kube-controller-manager-pause-20220602105035-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 10:51:19.844180   10827 pod_ready.go:81] duration metric: took 444.212361ms waiting for pod "kube-controller-manager-pause-20220602105035-2113" in "kube-system" namespace to be "Ready" ...
	I0602 10:51:19.844187   10827 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4qmtk" in "kube-system" namespace to be "Ready" ...
	I0602 10:51:20.171603   10827 pod_ready.go:92] pod "kube-proxy-4qmtk" in "kube-system" namespace has status "Ready":"True"
	I0602 10:51:20.171612   10827 pod_ready.go:81] duration metric: took 327.419769ms waiting for pod "kube-proxy-4qmtk" in "kube-system" namespace to be "Ready" ...
	I0602 10:51:20.171620   10827 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220602105035-2113" in "kube-system" namespace to be "Ready" ...
	I0602 10:51:20.571863   10827 pod_ready.go:92] pod "kube-scheduler-pause-20220602105035-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 10:51:20.571873   10827 pod_ready.go:81] duration metric: took 400.24689ms waiting for pod "kube-scheduler-pause-20220602105035-2113" in "kube-system" namespace to be "Ready" ...
	I0602 10:51:20.571878   10827 pod_ready.go:38] duration metric: took 1.200307899s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 10:51:20.571895   10827 api_server.go:51] waiting for apiserver process to appear ...
	I0602 10:51:20.571940   10827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 10:51:20.582727   10827 api_server.go:71] duration metric: took 1.410323575s to wait for apiserver process to appear ...
	I0602 10:51:20.582751   10827 api_server.go:87] waiting for apiserver healthz status ...
	I0602 10:51:20.582761   10827 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63416/healthz ...
	I0602 10:51:20.588027   10827 api_server.go:266] https://127.0.0.1:63416/healthz returned 200:
	ok
	I0602 10:51:20.589146   10827 api_server.go:140] control plane version: v1.23.6
	I0602 10:51:20.589155   10827 api_server.go:130] duration metric: took 6.398592ms to wait for apiserver health ...
	I0602 10:51:20.589160   10827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 10:51:20.774455   10827 system_pods.go:59] 8 kube-system pods found
	I0602 10:51:20.774469   10827 system_pods.go:61] "coredns-64897985d-m4864" [90023fe1-6712-4c41-b4dc-7e2bf555424b] Running
	I0602 10:51:20.774472   10827 system_pods.go:61] "coredns-64897985d-nr8b7" [c3ed3851-e3ea-410a-8099-98ac12bba157] Running
	I0602 10:51:20.774476   10827 system_pods.go:61] "etcd-pause-20220602105035-2113" [8af7fce6-fdfe-4225-89c4-0130b6606e5e] Running
	I0602 10:51:20.774479   10827 system_pods.go:61] "kube-apiserver-pause-20220602105035-2113" [444ba72e-4014-4099-8acc-dabaeca53bc3] Running
	I0602 10:51:20.774483   10827 system_pods.go:61] "kube-controller-manager-pause-20220602105035-2113" [3f4aa991-b8d8-4907-b9c7-8bf54fdb9bc3] Running
	I0602 10:51:20.774499   10827 system_pods.go:61] "kube-proxy-4qmtk" [ed3a457d-a11f-4b62-b1bd-26e05b8f4744] Running
	I0602 10:51:20.774506   10827 system_pods.go:61] "kube-scheduler-pause-20220602105035-2113" [9a9738ea-f526-449b-a650-732909a62181] Running
	I0602 10:51:20.774514   10827 system_pods.go:61] "storage-provisioner" [58717d3f-687d-4f81-bef2-fd1b4d639863] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 10:51:20.774524   10827 system_pods.go:74] duration metric: took 185.354842ms to wait for pod list to return data ...
	I0602 10:51:20.774530   10827 default_sa.go:34] waiting for default service account to be created ...
	I0602 10:51:20.974015   10827 default_sa.go:45] found service account: "default"
	I0602 10:51:20.974028   10827 default_sa.go:55] duration metric: took 199.493032ms for default service account to be created ...
	I0602 10:51:20.974035   10827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0602 10:51:21.173685   10827 system_pods.go:86] 8 kube-system pods found
	I0602 10:51:21.173700   10827 system_pods.go:89] "coredns-64897985d-m4864" [90023fe1-6712-4c41-b4dc-7e2bf555424b] Running
	I0602 10:51:21.173704   10827 system_pods.go:89] "coredns-64897985d-nr8b7" [c3ed3851-e3ea-410a-8099-98ac12bba157] Running
	I0602 10:51:21.173708   10827 system_pods.go:89] "etcd-pause-20220602105035-2113" [8af7fce6-fdfe-4225-89c4-0130b6606e5e] Running
	I0602 10:51:21.173712   10827 system_pods.go:89] "kube-apiserver-pause-20220602105035-2113" [444ba72e-4014-4099-8acc-dabaeca53bc3] Running
	I0602 10:51:21.173716   10827 system_pods.go:89] "kube-controller-manager-pause-20220602105035-2113" [3f4aa991-b8d8-4907-b9c7-8bf54fdb9bc3] Running
	I0602 10:51:21.173719   10827 system_pods.go:89] "kube-proxy-4qmtk" [ed3a457d-a11f-4b62-b1bd-26e05b8f4744] Running
	I0602 10:51:21.173723   10827 system_pods.go:89] "kube-scheduler-pause-20220602105035-2113" [9a9738ea-f526-449b-a650-732909a62181] Running
	I0602 10:51:21.173729   10827 system_pods.go:89] "storage-provisioner" [58717d3f-687d-4f81-bef2-fd1b4d639863] Running
	I0602 10:51:21.173734   10827 system_pods.go:126] duration metric: took 199.695365ms to wait for k8s-apps to be running ...
	I0602 10:51:21.173742   10827 system_svc.go:44] waiting for kubelet service to be running ....
	I0602 10:51:21.173794   10827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 10:51:21.183736   10827 system_svc.go:56] duration metric: took 9.99142ms WaitForService to wait for kubelet.
	I0602 10:51:21.183749   10827 kubeadm.go:572] duration metric: took 2.011354958s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0602 10:51:21.183767   10827 node_conditions.go:102] verifying NodePressure condition ...
	I0602 10:51:21.371867   10827 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 10:51:21.371889   10827 node_conditions.go:123] node cpu capacity is 6
	I0602 10:51:21.371905   10827 node_conditions.go:105] duration metric: took 188.133524ms to run NodePressure ...
	I0602 10:51:21.371914   10827 start.go:213] waiting for startup goroutines ...
	I0602 10:51:21.401967   10827 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0602 10:51:21.424748   10827 out.go:177] * Done! kubectl is now configured to use "pause-20220602105035-2113" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 17:50:42 UTC, end at Thu 2022-06-02 17:51:55 UTC. --
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[128]: time="2022-06-02T17:50:45.102168256Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[128]: time="2022-06-02T17:50:45.102760859Z" level=info msg="Daemon shutdown complete"
	Jun 02 17:50:45 pause-20220602105035-2113 systemd[1]: docker.service: Succeeded.
	Jun 02 17:50:45 pause-20220602105035-2113 systemd[1]: Stopped Docker Application Container Engine.
	Jun 02 17:50:45 pause-20220602105035-2113 systemd[1]: Starting Docker Application Container Engine...
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.145884465Z" level=info msg="Starting up"
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.147627699Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.147659785Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.147678680Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.147689403Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.149024932Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.149060664Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.149077755Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.149084928Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.153848327Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.157910481Z" level=info msg="Loading containers: start."
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.231113726Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.260527196Z" level=info msg="Loading containers: done."
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.268337010Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.268394332Z" level=info msg="Daemon has completed initialization"
	Jun 02 17:50:45 pause-20220602105035-2113 systemd[1]: Started Docker Application Container Engine.
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.290696359Z" level=info msg="API listen on [::]:2376"
	Jun 02 17:50:45 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:50:45.293458519Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 02 17:51:21 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:51:21.149217408Z" level=info msg="ignoring event" container=6ecad869aac0b1bb1926dcbc132ef1ebe39a4ee3a5f08b4e7714d547dd646118 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:51:21 pause-20220602105035-2113 dockerd[383]: time="2022-06-02T17:51:21.192117571Z" level=info msg="ignoring event" container=3197a8a203d5949253dde87be8b45ebc84c934a79382f553836a0c0de9563106 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS                       PORTS     NAMES
	8269f7092418   6e38f40d628d           "/storage-provisioner"   37 seconds ago       Up 37 seconds (Paused)                 k8s_storage-provisioner_storage-provisioner_kube-system_58717d3f-687d-4f81-bef2-fd1b4d639863_0
	791a01ae4732   k8s.gcr.io/pause:3.6   "/pause"                 37 seconds ago       Up 37 seconds (Paused)                 k8s_POD_storage-provisioner_kube-system_58717d3f-687d-4f81-bef2-fd1b4d639863_0
	31905f84935d   a4ca41631cc7           "/coredns -conf /etc…"   46 seconds ago       Up 45 seconds (Paused)                 k8s_coredns_coredns-64897985d-m4864_kube-system_90023fe1-6712-4c41-b4dc-7e2bf555424b_0
	a7ca8743ea8f   k8s.gcr.io/pause:3.6   "/pause"                 47 seconds ago       Up 46 seconds (Paused)                 k8s_POD_coredns-64897985d-m4864_kube-system_90023fe1-6712-4c41-b4dc-7e2bf555424b_0
	5b9092197fce   4c0375452406           "/usr/local/bin/kube…"   47 seconds ago       Up 46 seconds (Paused)                 k8s_kube-proxy_kube-proxy-4qmtk_kube-system_ed3a457d-a11f-4b62-b1bd-26e05b8f4744_0
	ea7b39c68023   k8s.gcr.io/pause:3.6   "/pause"                 47 seconds ago       Up 46 seconds (Paused)                 k8s_POD_kube-proxy-4qmtk_kube-system_ed3a457d-a11f-4b62-b1bd-26e05b8f4744_0
	3197a8a203d5   k8s.gcr.io/pause:3.6   "/pause"                 47 seconds ago       Exited (0) 36 seconds ago              k8s_POD_coredns-64897985d-nr8b7_kube-system_c3ed3851-e3ea-410a-8099-98ac12bba157_0
	086b0da9736c   8fa62c12256d           "kube-apiserver --ad…"   About a minute ago   Up About a minute (Paused)             k8s_kube-apiserver_kube-apiserver-pause-20220602105035-2113_kube-system_0640f46b958f0642ac844c6a5c12b9a0_0
	e957f316ecf7   df7b72818ad2           "kube-controller-man…"   About a minute ago   Up About a minute (Paused)             k8s_kube-controller-manager_kube-controller-manager-pause-20220602105035-2113_kube-system_b1c4febb2e1504da8a9e08fe2d12cfba_0
	31a84b48703e   25f8c7f3da61           "etcd --advertise-cl…"   About a minute ago   Up About a minute (Paused)             k8s_etcd_etcd-pause-20220602105035-2113_kube-system_eb448234916cca05e4482d6eb7753b49_0
	3ef0ff8f3677   595f327f224a           "kube-scheduler --au…"   About a minute ago   Up About a minute (Paused)             k8s_kube-scheduler_kube-scheduler-pause-20220602105035-2113_kube-system_6913279ba7fe2439ede5a55551db4503_0
	ea23c804e916   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-scheduler-pause-20220602105035-2113_kube-system_6913279ba7fe2439ede5a55551db4503_0
	d20628a4bcd0   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-controller-manager-pause-20220602105035-2113_kube-system_b1c4febb2e1504da8a9e08fe2d12cfba_0
	d0213581116a   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-apiserver-pause-20220602105035-2113_kube-system_0640f46b958f0642ac844c6a5c12b9a0_0
	7e6019da3a44   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_etcd-pause-20220602105035-2113_kube-system_eb448234916cca05e4482d6eb7753b49_0
	time="2022-06-02T17:51:57Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> coredns [31905f84935d] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001438] FS-Cache: O-key=[8] '8212da0200000000'
	[  +0.001117] FS-Cache: N-cookie c=0000000016a13b2d [p=000000009900b4c5 fl=2 nc=0 na=1]
	[  +0.001771] FS-Cache: N-cookie d=00000000c757acde n=00000000578e5195
	[  +0.001432] FS-Cache: N-key=[8] '8212da0200000000'
	[  +0.001802] FS-Cache: Duplicate cookie detected
	[  +0.000997] FS-Cache: O-cookie c=00000000d3155457 [p=000000009900b4c5 fl=226 nc=0 na=1]
	[  +0.001776] FS-Cache: O-cookie d=00000000c757acde n=00000000a75076f7
	[  +0.001441] FS-Cache: O-key=[8] '8212da0200000000'
	[  +0.001141] FS-Cache: N-cookie c=0000000016a13b2d [p=000000009900b4c5 fl=2 nc=0 na=1]
	[  +0.001768] FS-Cache: N-cookie d=00000000c757acde n=00000000de0e4618
	[  +0.001450] FS-Cache: N-key=[8] '8212da0200000000'
	[  +3.961276] FS-Cache: Duplicate cookie detected
	[  +0.001030] FS-Cache: O-cookie c=00000000422eca30 [p=000000009900b4c5 fl=226 nc=0 na=1]
	[  +0.001768] FS-Cache: O-cookie d=00000000c757acde n=00000000a474c9b7
	[  +0.001460] FS-Cache: O-key=[8] '8112da0200000000'
	[  +0.001115] FS-Cache: N-cookie c=00000000714715d4 [p=000000009900b4c5 fl=2 nc=0 na=1]
	[  +0.001757] FS-Cache: N-cookie d=00000000c757acde n=00000000de0e4618
	[  +0.001432] FS-Cache: N-key=[8] '8112da0200000000'
	[  +0.427724] FS-Cache: Duplicate cookie detected
	[  +0.001023] FS-Cache: O-cookie c=00000000f7889376 [p=000000009900b4c5 fl=226 nc=0 na=1]
	[  +0.001778] FS-Cache: O-cookie d=00000000c757acde n=0000000016dfe9c6
	[  +0.001461] FS-Cache: O-key=[8] '8b12da0200000000'
	[  +0.001088] FS-Cache: N-cookie c=00000000c2958e6c [p=000000009900b4c5 fl=2 nc=0 na=1]
	[  +0.001745] FS-Cache: N-cookie d=00000000c757acde n=0000000017be1b1a
	[  +0.001394] FS-Cache: N-key=[8] '8b12da0200000000'
	
	* 
	* ==> etcd [31a84b48703e] <==
	* {"level":"info","ts":"2022-06-02T17:50:52.197Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-02T17:50:52.197Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-02T17:50:52.197Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-02T17:50:52.197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-06-02T17:50:52.197Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-02T17:50:52.197Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:50:52.197Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:50:52.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-02T17:50:52.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-02T17:50:52.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-02T17:50:52.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T17:50:52.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:50:52.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-02T17:50:52.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:50:52.980Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:50:52.981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:50:52.981Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:50:52.981Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:50:52.981Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:pause-20220602105035-2113 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T17:50:52.983Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:50:52.983Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:50:52.983Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T17:50:52.983Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T17:50:52.984Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T17:50:52.984Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  17:52:08 up 40 min,  0 users,  load average: 1.24, 1.60, 1.31
	Linux pause-20220602105035-2113 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [086b0da9736c] <==
	* I0602 17:50:54.748851       1 cache.go:39] Caches are synced for autoregister controller
	I0602 17:50:54.750602       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0602 17:50:54.752486       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0602 17:50:54.777817       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0602 17:50:54.778043       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0602 17:50:54.778708       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0602 17:50:55.648253       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0602 17:50:55.648931       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0602 17:50:55.653533       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0602 17:50:55.656135       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0602 17:50:55.656193       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0602 17:50:55.980351       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 17:50:56.004204       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 17:50:56.098495       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0602 17:50:56.102058       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0602 17:50:56.102885       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 17:50:56.105622       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0602 17:50:56.786687       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 17:50:57.377121       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 17:50:57.382315       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0602 17:50:57.391158       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0602 17:50:57.546604       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0602 17:51:09.818983       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0602 17:51:10.470857       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0602 17:51:11.273081       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [e957f316ecf7] <==
	* I0602 17:51:09.818969       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0602 17:51:09.818974       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0602 17:51:09.819178       1 event.go:294] "Event occurred" object="pause-20220602105035-2113" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220602105035-2113 event: Registered Node pause-20220602105035-2113 in Controller"
	I0602 17:51:09.823511       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4qmtk"
	I0602 17:51:09.827937       1 shared_informer.go:247] Caches are synced for namespace 
	I0602 17:51:09.827973       1 shared_informer.go:247] Caches are synced for PV protection 
	I0602 17:51:09.829189       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0602 17:51:09.864092       1 shared_informer.go:247] Caches are synced for deployment 
	I0602 17:51:09.864196       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0602 17:51:09.876018       1 shared_informer.go:247] Caches are synced for stateful set 
	I0602 17:51:09.906508       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0602 17:51:09.910328       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0602 17:51:09.964384       1 shared_informer.go:247] Caches are synced for disruption 
	I0602 17:51:09.964444       1 disruption.go:371] Sending events to api server.
	I0602 17:51:10.001065       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:51:10.032418       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:51:10.451959       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:51:10.472852       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0602 17:51:10.481819       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0602 17:51:10.531933       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:51:10.531990       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0602 17:51:10.622645       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-nr8b7"
	I0602 17:51:10.627557       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-m4864"
	I0602 17:51:10.641513       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-nr8b7"
	W0602 17:51:21.990224       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	
	* 
	* ==> kube-proxy [5b9092197fce] <==
	* I0602 17:51:11.250395       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0602 17:51:11.250444       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0602 17:51:11.250487       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 17:51:11.270927       1 server_others.go:206] "Using iptables Proxier"
	I0602 17:51:11.270961       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 17:51:11.270967       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 17:51:11.270976       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 17:51:11.271229       1 server.go:656] "Version info" version="v1.23.6"
	I0602 17:51:11.271636       1 config.go:317] "Starting service config controller"
	I0602 17:51:11.271670       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 17:51:11.271726       1 config.go:226] "Starting endpoint slice config controller"
	I0602 17:51:11.271732       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 17:51:11.372309       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 17:51:11.372379       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [3ef0ff8f3677] <==
	* W0602 17:50:54.693234       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 17:50:54.693218       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 17:50:54.693249       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 17:50:54.693249       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 17:50:54.693423       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0602 17:50:54.693452       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0602 17:50:54.693576       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0602 17:50:54.693610       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 17:50:54.693582       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 17:50:54.693623       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 17:50:54.693636       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0602 17:50:54.693645       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 17:50:55.589828       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 17:50:55.589892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 17:50:55.635860       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 17:50:55.635930       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 17:50:55.709547       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 17:50:55.709645       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 17:50:55.754534       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0602 17:50:55.754573       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 17:50:55.785545       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0602 17:50:55.785582       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0602 17:50:55.870501       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0602 17:50:55.870592       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0602 17:50:56.188206       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 17:50:42 UTC, end at Thu 2022-06-02 17:52:08 UTC. --
	Jun 02 17:51:10 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:10.903782    1784 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ea7b39c68023fff46aaf8f7be287b1612532cfe8e3652adca13161b0f014a0fc"
	Jun 02 17:51:10 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:10.905206    1784 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-nr8b7 through plugin: invalid network status for"
	Jun 02 17:51:10 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:10.905931    1784 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3197a8a203d5949253dde87be8b45ebc84c934a79382f553836a0c0de9563106"
	Jun 02 17:51:11 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:11.397291    1784 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-m4864 through plugin: invalid network status for"
	Jun 02 17:51:11 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:11.915529    1784 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-nr8b7 through plugin: invalid network status for"
	Jun 02 17:51:11 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:11.919291    1784 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-m4864 through plugin: invalid network status for"
	Jun 02 17:51:19 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:19.758062    1784 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 17:51:19 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:19.794650    1784 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/58717d3f-687d-4f81-bef2-fd1b4d639863-tmp\") pod \"storage-provisioner\" (UID: \"58717d3f-687d-4f81-bef2-fd1b4d639863\") " pod="kube-system/storage-provisioner"
	Jun 02 17:51:19 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:19.794702    1784 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqw56\" (UniqueName: \"kubernetes.io/projected/58717d3f-687d-4f81-bef2-fd1b4d639863-kube-api-access-dqw56\") pod \"storage-provisioner\" (UID: \"58717d3f-687d-4f81-bef2-fd1b4d639863\") " pod="kube-system/storage-provisioner"
	Jun 02 17:51:21 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:21.305219    1784 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pntw5\" (UniqueName: \"kubernetes.io/projected/c3ed3851-e3ea-410a-8099-98ac12bba157-kube-api-access-pntw5\") pod \"c3ed3851-e3ea-410a-8099-98ac12bba157\" (UID: \"c3ed3851-e3ea-410a-8099-98ac12bba157\") "
	Jun 02 17:51:21 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:21.305279    1784 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3ed3851-e3ea-410a-8099-98ac12bba157-config-volume\") pod \"c3ed3851-e3ea-410a-8099-98ac12bba157\" (UID: \"c3ed3851-e3ea-410a-8099-98ac12bba157\") "
	Jun 02 17:51:21 pause-20220602105035-2113 kubelet[1784]: W0602 17:51:21.305541    1784 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/c3ed3851-e3ea-410a-8099-98ac12bba157/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jun 02 17:51:21 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:21.305823    1784 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3ed3851-e3ea-410a-8099-98ac12bba157-config-volume" (OuterVolumeSpecName: "config-volume") pod "c3ed3851-e3ea-410a-8099-98ac12bba157" (UID: "c3ed3851-e3ea-410a-8099-98ac12bba157"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jun 02 17:51:21 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:21.308339    1784 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ed3851-e3ea-410a-8099-98ac12bba157-kube-api-access-pntw5" (OuterVolumeSpecName: "kube-api-access-pntw5") pod "c3ed3851-e3ea-410a-8099-98ac12bba157" (UID: "c3ed3851-e3ea-410a-8099-98ac12bba157"). InnerVolumeSpecName "kube-api-access-pntw5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 02 17:51:21 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:21.406411    1784 reconciler.go:300] "Volume detached for volume \"kube-api-access-pntw5\" (UniqueName: \"kubernetes.io/projected/c3ed3851-e3ea-410a-8099-98ac12bba157-kube-api-access-pntw5\") on node \"pause-20220602105035-2113\" DevicePath \"\""
	Jun 02 17:51:21 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:21.406458    1784 reconciler.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3ed3851-e3ea-410a-8099-98ac12bba157-config-volume\") on node \"pause-20220602105035-2113\" DevicePath \"\""
	Jun 02 17:51:21 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:21.974856    1784 scope.go:110] "RemoveContainer" containerID="6ecad869aac0b1bb1926dcbc132ef1ebe39a4ee3a5f08b4e7714d547dd646118"
	Jun 02 17:51:21 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:21.986385    1784 scope.go:110] "RemoveContainer" containerID="6ecad869aac0b1bb1926dcbc132ef1ebe39a4ee3a5f08b4e7714d547dd646118"
	Jun 02 17:51:21 pause-20220602105035-2113 kubelet[1784]: E0602 17:51:21.987724    1784 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 6ecad869aac0b1bb1926dcbc132ef1ebe39a4ee3a5f08b4e7714d547dd646118" containerID="6ecad869aac0b1bb1926dcbc132ef1ebe39a4ee3a5f08b4e7714d547dd646118"
	Jun 02 17:51:21 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:21.988460    1784 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:6ecad869aac0b1bb1926dcbc132ef1ebe39a4ee3a5f08b4e7714d547dd646118} err="failed to get container status \"6ecad869aac0b1bb1926dcbc132ef1ebe39a4ee3a5f08b4e7714d547dd646118\": rpc error: code = Unknown desc = Error: No such container: 6ecad869aac0b1bb1926dcbc132ef1ebe39a4ee3a5f08b4e7714d547dd646118"
	Jun 02 17:51:22 pause-20220602105035-2113 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jun 02 17:51:22 pause-20220602105035-2113 kubelet[1784]: I0602 17:51:22.000299    1784 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jun 02 17:51:22 pause-20220602105035-2113 systemd[1]: kubelet.service: Succeeded.
	Jun 02 17:51:22 pause-20220602105035-2113 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 02 17:51:22 pause-20220602105035-2113 systemd[1]: kubelet.service: Consumed 1.022s CPU time.
	
	* 
	* ==> storage-provisioner [8269f7092418] <==
	* I0602 17:51:20.253981       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 17:51:20.260517       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 17:51:20.260597       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 17:51:20.271644       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 17:51:20.271866       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220602105035-2113_c7b1955a-0674-4115-82e2-cce29bf5cd57!
	I0602 17:51:20.271882       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"204d2ed9-5dfd-4041-89b2-907b4ba5f015", APIVersion:"v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220602105035-2113_c7b1955a-0674-4115-82e2-cce29bf5cd57 became leader
	I0602 17:51:20.372044       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220602105035-2113_c7b1955a-0674-4115-82e2-cce29bf5cd57!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 10:52:07.738590   10940 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220602105035-2113 -n pause-20220602105035-2113
E0602 10:52:12.882982    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 10:52:15.657037    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220602105035-2113 -n pause-20220602105035-2113: exit status 2 (16.120693971s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-20220602105035-2113" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/VerifyStatus (62.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/Start (220.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220602104456-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cilium-20220602104456-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : exit status 80 (3m40.643818763s)

                                                
                                                
-- stdout --
	* [cilium-20220602104456-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node cilium-20220602104456-2113 in cluster cilium-20220602104456-2113
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cilium-20220602104456-2113" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 10:54:38.390767   11996 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:54:38.390929   11996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:54:38.390934   11996 out.go:309] Setting ErrFile to fd 2...
	I0602 10:54:38.390938   11996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:54:38.391025   11996 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 10:54:38.391352   11996 out.go:303] Setting JSON to false
	I0602 10:54:38.407466   11996 start.go:115] hostinfo: {"hostname":"37309.local","uptime":3248,"bootTime":1654189230,"procs":349,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 10:54:38.407543   11996 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 10:54:38.429438   11996 out.go:177] * [cilium-20220602104456-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 10:54:38.451398   11996 notify.go:193] Checking for updates...
	I0602 10:54:38.451420   11996 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 10:54:38.473363   11996 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 10:54:38.495199   11996 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 10:54:38.516348   11996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 10:54:38.537245   11996 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 10:54:38.559176   11996 config.go:178] Loaded profile config "kindnet-20220602104455-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 10:54:38.559269   11996 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 10:54:38.630457   11996 docker.go:137] docker version: linux-20.10.14
	I0602 10:54:38.630597   11996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:54:38.756490   11996 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 17:54:38.696420092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:54:38.799122   11996 out.go:177] * Using the docker driver based on user configuration
	I0602 10:54:38.820145   11996 start.go:284] selected driver: docker
	I0602 10:54:38.820202   11996 start.go:806] validating driver "docker" against <nil>
	I0602 10:54:38.820228   11996 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 10:54:38.823781   11996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:54:38.949569   11996 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 17:54:38.889837441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:54:38.949683   11996 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 10:54:38.949877   11996 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 10:54:38.971445   11996 out.go:177] * Using Docker Desktop driver with the root privilege
	I0602 10:54:38.993455   11996 cni.go:95] Creating CNI manager for "cilium"
	I0602 10:54:38.993476   11996 start_flags.go:301] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0602 10:54:38.993493   11996 start_flags.go:306] config:
	{Name:cilium-20220602104456-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220602104456-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:54:39.015301   11996 out.go:177] * Starting control plane node cilium-20220602104456-2113 in cluster cilium-20220602104456-2113
	I0602 10:54:39.057361   11996 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 10:54:39.078486   11996 out.go:177] * Pulling base image ...
	I0602 10:54:39.120265   11996 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 10:54:39.120266   11996 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 10:54:39.120329   11996 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 10:54:39.120349   11996 cache.go:57] Caching tarball of preloaded images
	I0602 10:54:39.120510   11996 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 10:54:39.120533   11996 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 10:54:39.121172   11996 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602104456-2113/config.json ...
	I0602 10:54:39.121243   11996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602104456-2113/config.json: {Name:mkf3babbc09e2988db48d9bc7f0ac73ab797385c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:54:39.185354   11996 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 10:54:39.185374   11996 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 10:54:39.185382   11996 cache.go:206] Successfully downloaded all kic artifacts
	I0602 10:54:39.185430   11996 start.go:352] acquiring machines lock for cilium-20220602104456-2113: {Name:mk0d22ee1752d7423f50dae21265305f296b919d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:54:39.185576   11996 start.go:356] acquired machines lock for "cilium-20220602104456-2113" in 134.575µs
	I0602 10:54:39.185603   11996 start.go:91] Provisioning new machine with config: &{Name:cilium-20220602104456-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220602104456-2113 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 10:54:39.185686   11996 start.go:131] createHost starting for "" (driver="docker")
	I0602 10:54:39.229200   11996 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0602 10:54:39.229604   11996 start.go:165] libmachine.API.Create for "cilium-20220602104456-2113" (driver="docker")
	I0602 10:54:39.229661   11996 client.go:168] LocalClient.Create starting
	I0602 10:54:39.229847   11996 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem
	I0602 10:54:39.229918   11996 main.go:134] libmachine: Decoding PEM data...
	I0602 10:54:39.229941   11996 main.go:134] libmachine: Parsing certificate...
	I0602 10:54:39.230048   11996 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem
	I0602 10:54:39.230096   11996 main.go:134] libmachine: Decoding PEM data...
	I0602 10:54:39.230111   11996 main.go:134] libmachine: Parsing certificate...
	I0602 10:54:39.231049   11996 cli_runner.go:164] Run: docker network inspect cilium-20220602104456-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 10:54:39.294451   11996 cli_runner.go:211] docker network inspect cilium-20220602104456-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 10:54:39.294534   11996 network_create.go:272] running [docker network inspect cilium-20220602104456-2113] to gather additional debugging logs...
	I0602 10:54:39.294551   11996 cli_runner.go:164] Run: docker network inspect cilium-20220602104456-2113
	W0602 10:54:39.356373   11996 cli_runner.go:211] docker network inspect cilium-20220602104456-2113 returned with exit code 1
	I0602 10:54:39.356395   11996 network_create.go:275] error running [docker network inspect cilium-20220602104456-2113]: docker network inspect cilium-20220602104456-2113: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220602104456-2113
	I0602 10:54:39.356411   11996 network_create.go:277] output of [docker network inspect cilium-20220602104456-2113]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220602104456-2113
	
	** /stderr **
	I0602 10:54:39.356498   11996 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 10:54:39.419845   11996 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00059ea98] misses:0}
	I0602 10:54:39.419881   11996 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 10:54:39.419896   11996 network_create.go:115] attempt to create docker network cilium-20220602104456-2113 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 10:54:39.419953   11996 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220602104456-2113
	I0602 10:54:39.513315   11996 network_create.go:99] docker network cilium-20220602104456-2113 192.168.49.0/24 created
	I0602 10:54:39.513347   11996 kic.go:106] calculated static IP "192.168.49.2" for the "cilium-20220602104456-2113" container
	I0602 10:54:39.513430   11996 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 10:54:39.579680   11996 cli_runner.go:164] Run: docker volume create cilium-20220602104456-2113 --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --label created_by.minikube.sigs.k8s.io=true
	I0602 10:54:39.641921   11996 oci.go:103] Successfully created a docker volume cilium-20220602104456-2113
	I0602 10:54:39.642029   11996 cli_runner.go:164] Run: docker run --rm --name cilium-20220602104456-2113-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --entrypoint /usr/bin/test -v cilium-20220602104456-2113:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 10:54:40.148549   11996 oci.go:107] Successfully prepared a docker volume cilium-20220602104456-2113
	I0602 10:54:40.148588   11996 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 10:54:40.148603   11996 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 10:54:40.148729   11996 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220602104456-2113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 10:54:44.805447   11996 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220602104456-2113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (4.656643088s)
	I0602 10:54:44.805468   11996 kic.go:188] duration metric: took 4.656849 seconds to extract preloaded images to volume
	I0602 10:54:44.805559   11996 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 10:54:44.944462   11996 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220602104456-2113 --name cilium-20220602104456-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220602104456-2113 --network cilium-20220602104456-2113 --ip 192.168.49.2 --volume cilium-20220602104456-2113:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	W0602 10:54:45.161456   11996 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220602104456-2113 --name cilium-20220602104456-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220602104456-2113 --network cilium-20220602104456-2113 --ip 192.168.49.2 --volume cilium-20220602104456-2113:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 returned with exit code 125
	I0602 10:54:45.161508   11996 client.go:171] LocalClient.Create took 5.931818047s
	I0602 10:54:47.162286   11996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 10:54:47.162352   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:54:47.226486   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:54:47.226586   11996 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:54:47.503007   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:54:47.580765   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:54:47.580845   11996 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:54:48.121244   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:54:48.195130   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:54:48.195216   11996 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:54:48.850475   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:54:48.921575   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	W0602 10:54:48.921666   11996 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 10:54:48.921681   11996 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:54:48.921740   11996 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 10:54:48.921817   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:54:48.993973   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:54:48.994093   11996 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:54:49.225433   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:54:49.296544   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:54:49.296616   11996 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:54:49.741886   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:54:49.812202   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:54:49.812289   11996 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:54:50.130990   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:54:50.204313   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:54:50.204390   11996 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:54:50.758480   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:54:50.833532   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	W0602 10:54:50.833630   11996 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 10:54:50.833649   11996 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:54:50.833658   11996 start.go:134] duration metric: createHost completed in 11.647930261s
	I0602 10:54:50.833665   11996 start.go:81] releasing machines lock for "cilium-20220602104456-2113", held for 11.648044471s
	W0602 10:54:50.833681   11996 start.go:599] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220602104456-2113 --name cilium-20220602104456-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220602104456-2113 --network cilium-20220602104456-2113 --ip 192.168.49.2 --volume cilium-20220602104456-2113:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: exit status 125
	stdout:
	6e15931d5cc111c970e1c737eef10ed423e8308ab2ee467ada0a1d4b7e300813
	
	stderr:
	docker: Error response from daemon: network cilium-20220602104456-2113 not found.
	I0602 10:54:50.834186   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	W0602 10:54:50.906905   11996 start.go:604] delete host: Docker machine "cilium-20220602104456-2113" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0602 10:54:50.907167   11996 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220602104456-2113 --name cilium-20220602104456-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220602104456-2113 --network cilium-20220602104456-2113 --ip 192.168.49.2 --volume cilium-20220602104456-2113:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: exit status 125
	stdout:
	6e15931d5cc111c970e1c737eef10ed423e8308ab2ee467ada0a1d4b7e300813
	
	stderr:
	docker: Error response from daemon: network cilium-20220602104456-2113 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220602104456-2113 --name cilium-20220602104456-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220602104456-2113 --network cilium-20220602104456-2113 --ip 192.168.49.2 --volume cilium-20220602104456-2113:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: exit status 125
	stdout:
	6e15931d5cc111c970e1c737eef10ed423e8308ab2ee467ada0a1d4b7e300813
	
	stderr:
	docker: Error response from daemon: network cilium-20220602104456-2113 not found.
	
	I0602 10:54:50.907190   11996 start.go:614] Will try again in 5 seconds ...
	I0602 10:54:55.907766   11996 start.go:352] acquiring machines lock for cilium-20220602104456-2113: {Name:mk0d22ee1752d7423f50dae21265305f296b919d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:54:55.907890   11996 start.go:356] acquired machines lock for "cilium-20220602104456-2113" in 99.789µs
	I0602 10:54:55.907919   11996 start.go:94] Skipping create...Using existing machine configuration
	I0602 10:54:55.907927   11996 fix.go:55] fixHost starting: 
	I0602 10:54:55.908188   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:54:55.977664   11996 fix.go:103] recreateIfNeeded on cilium-20220602104456-2113: state= err=<nil>
	I0602 10:54:55.977700   11996 fix.go:108] machineExists: false. err=machine does not exist
	I0602 10:54:55.998463   11996 out.go:177] * docker "cilium-20220602104456-2113" container is missing, will recreate.
	I0602 10:54:56.040484   11996 delete.go:124] DEMOLISHING cilium-20220602104456-2113 ...
	I0602 10:54:56.040671   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:54:56.109656   11996 stop.go:79] host is in state 
	I0602 10:54:56.109717   11996 main.go:134] libmachine: Stopping "cilium-20220602104456-2113"...
	I0602 10:54:56.109806   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:54:56.178086   11996 kic_runner.go:93] Run: systemctl --version
	I0602 10:54:56.178104   11996 kic_runner.go:114] Args: [docker exec --privileged cilium-20220602104456-2113 systemctl --version]
	I0602 10:54:56.252056   11996 kic_runner.go:93] Run: sudo service kubelet stop
	I0602 10:54:56.252073   11996 kic_runner.go:114] Args: [docker exec --privileged cilium-20220602104456-2113 sudo service kubelet stop]
	I0602 10:54:56.324467   11996 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 6e15931d5cc111c970e1c737eef10ed423e8308ab2ee467ada0a1d4b7e300813 is not running
	
	** /stderr **
	W0602 10:54:56.324481   11996 kic.go:439] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 6e15931d5cc111c970e1c737eef10ed423e8308ab2ee467ada0a1d4b7e300813 is not running
	I0602 10:54:56.324571   11996 kic_runner.go:93] Run: sudo service kubelet stop
	I0602 10:54:56.324580   11996 kic_runner.go:114] Args: [docker exec --privileged cilium-20220602104456-2113 sudo service kubelet stop]
	I0602 10:54:56.398738   11996 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 6e15931d5cc111c970e1c737eef10ed423e8308ab2ee467ada0a1d4b7e300813 is not running
	
	** /stderr **
	W0602 10:54:56.398795   11996 kic.go:441] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 6e15931d5cc111c970e1c737eef10ed423e8308ab2ee467ada0a1d4b7e300813 is not running
	I0602 10:54:56.398912   11996 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0602 10:54:56.398922   11996 kic_runner.go:114] Args: [docker exec --privileged cilium-20220602104456-2113 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0602 10:54:56.473497   11996 kic.go:452] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 6e15931d5cc111c970e1c737eef10ed423e8308ab2ee467ada0a1d4b7e300813 is not running
	I0602 10:54:56.473517   11996 kic.go:462] successfully stopped kubernetes!
	I0602 10:54:56.473641   11996 kic_runner.go:93] Run: pgrep kube-apiserver
	I0602 10:54:56.473649   11996 kic_runner.go:114] Args: [docker exec --privileged cilium-20220602104456-2113 pgrep kube-apiserver]
	I0602 10:54:56.645934   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:54:59.720066   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:02.788994   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:05.860286   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:08.928550   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:11.997511   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:15.065682   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:18.134489   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:21.202133   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:24.282766   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:27.359562   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:30.431940   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:33.508159   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:36.580595   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:39.650304   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:42.720368   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:45.789036   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:48.860402   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:51.931577   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:55.003276   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:55:58.072248   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:01.267467   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:04.344628   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:07.412957   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:10.481210   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:13.545681   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:16.611499   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:19.684150   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:22.759783   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:25.831834   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:28.902843   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:31.972911   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:35.043690   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:38.111840   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:41.181864   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:44.251659   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:47.320272   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:50.390576   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:53.461633   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:56.659703   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:56:59.739446   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:02.808888   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:05.882482   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:08.956841   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:12.026852   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:15.092219   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:18.161160   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:21.224443   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:24.296805   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:27.406266   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:30.474783   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:33.543088   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:36.616718   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:39.690438   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:42.761047   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:45.830459   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:48.899285   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:51.972702   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:55.044367   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:57:58.115294   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:58:01.187097   11996 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0602 10:58:01.187163   11996 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0602 10:58:01.187879   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	W0602 10:58:01.256114   11996 delete.go:135] deletehost failed: Docker machine "cilium-20220602104456-2113" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0602 10:58:01.256268   11996 cli_runner.go:164] Run: docker container inspect -f {{.Id}} cilium-20220602104456-2113
	I0602 10:58:01.323355   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:58:01.390381   11996 cli_runner.go:164] Run: docker exec --privileged -t cilium-20220602104456-2113 /bin/bash -c "sudo init 0"
	W0602 10:58:01.464527   11996 cli_runner.go:211] docker exec --privileged -t cilium-20220602104456-2113 /bin/bash -c "sudo init 0" returned with exit code 1
	I0602 10:58:01.464553   11996 oci.go:625] error shutdown cilium-20220602104456-2113: docker exec --privileged -t cilium-20220602104456-2113 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 6e15931d5cc111c970e1c737eef10ed423e8308ab2ee467ada0a1d4b7e300813 is not running
	I0602 10:58:02.465052   11996 cli_runner.go:164] Run: docker container inspect cilium-20220602104456-2113 --format={{.State.Status}}
	I0602 10:58:02.534124   11996 oci.go:639] temporary error: container cilium-20220602104456-2113 status is  but expect it to be exited
	I0602 10:58:02.534143   11996 oci.go:645] Successfully shutdown container cilium-20220602104456-2113
	I0602 10:58:02.534241   11996 cli_runner.go:164] Run: docker rm -f -v cilium-20220602104456-2113
	I0602 10:58:02.604122   11996 cli_runner.go:164] Run: docker container inspect -f {{.Id}} cilium-20220602104456-2113
	W0602 10:58:02.668617   11996 cli_runner.go:211] docker container inspect -f {{.Id}} cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:02.668734   11996 cli_runner.go:164] Run: docker network inspect cilium-20220602104456-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 10:58:02.734655   11996 cli_runner.go:211] docker network inspect cilium-20220602104456-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 10:58:02.734733   11996 network_create.go:272] running [docker network inspect cilium-20220602104456-2113] to gather additional debugging logs...
	I0602 10:58:02.734763   11996 cli_runner.go:164] Run: docker network inspect cilium-20220602104456-2113
	W0602 10:58:02.796402   11996 cli_runner.go:211] docker network inspect cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:02.796424   11996 network_create.go:275] error running [docker network inspect cilium-20220602104456-2113]: docker network inspect cilium-20220602104456-2113: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220602104456-2113
	I0602 10:58:02.796456   11996 network_create.go:277] output of [docker network inspect cilium-20220602104456-2113]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220602104456-2113
	
	** /stderr **
	W0602 10:58:02.796734   11996 delete.go:139] delete failed (probably ok) <nil>
	I0602 10:58:02.796741   11996 fix.go:115] Sleeping 1 second for extra luck!
	I0602 10:58:03.796983   11996 start.go:131] createHost starting for "" (driver="docker")
	I0602 10:58:03.819062   11996 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0602 10:58:03.819230   11996 start.go:165] libmachine.API.Create for "cilium-20220602104456-2113" (driver="docker")
	I0602 10:58:03.819281   11996 client.go:168] LocalClient.Create starting
	I0602 10:58:03.819416   11996 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem
	I0602 10:58:03.819488   11996 main.go:134] libmachine: Decoding PEM data...
	I0602 10:58:03.819536   11996 main.go:134] libmachine: Parsing certificate...
	I0602 10:58:03.819641   11996 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem
	I0602 10:58:03.819688   11996 main.go:134] libmachine: Decoding PEM data...
	I0602 10:58:03.819709   11996 main.go:134] libmachine: Parsing certificate...
	I0602 10:58:03.840971   11996 cli_runner.go:164] Run: docker network inspect cilium-20220602104456-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 10:58:03.903927   11996 cli_runner.go:211] docker network inspect cilium-20220602104456-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 10:58:03.904000   11996 network_create.go:272] running [docker network inspect cilium-20220602104456-2113] to gather additional debugging logs...
	I0602 10:58:03.904019   11996 cli_runner.go:164] Run: docker network inspect cilium-20220602104456-2113
	W0602 10:58:03.966405   11996 cli_runner.go:211] docker network inspect cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:03.966425   11996 network_create.go:275] error running [docker network inspect cilium-20220602104456-2113]: docker network inspect cilium-20220602104456-2113: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220602104456-2113
	I0602 10:58:03.966466   11996 network_create.go:277] output of [docker network inspect cilium-20220602104456-2113]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220602104456-2113
	
	** /stderr **
	I0602 10:58:03.966537   11996 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 10:58:04.029400   11996 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00059ea98] amended:false}} dirty:map[] misses:0}
	I0602 10:58:04.029428   11996 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 10:58:04.029463   11996 network_create.go:115] attempt to create docker network cilium-20220602104456-2113 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 10:58:04.029552   11996 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220602104456-2113
	W0602 10:58:04.092158   11996 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220602104456-2113 returned with exit code 1
	W0602 10:58:04.092197   11996 network_create.go:107] failed to create docker network cilium-20220602104456-2113 192.168.49.0/24, will retry: subnet is taken
	I0602 10:58:04.092509   11996 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00059ea98] amended:false}} dirty:map[] misses:0}
	I0602 10:58:04.092528   11996 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 10:58:04.092764   11996 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00059ea98] amended:true}} dirty:map[192.168.49.0:0xc00059ea98 192.168.58.0:0xc000bbe2a8] misses:0}
	I0602 10:58:04.092783   11996 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 10:58:04.092789   11996 network_create.go:115] attempt to create docker network cilium-20220602104456-2113 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0602 10:58:04.092852   11996 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220602104456-2113
	I0602 10:58:04.187700   11996 network_create.go:99] docker network cilium-20220602104456-2113 192.168.58.0/24 created
	I0602 10:58:04.187734   11996 kic.go:106] calculated static IP "192.168.58.2" for the "cilium-20220602104456-2113" container
	I0602 10:58:04.187810   11996 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 10:58:04.255323   11996 cli_runner.go:164] Run: docker volume create cilium-20220602104456-2113 --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --label created_by.minikube.sigs.k8s.io=true
	I0602 10:58:04.318104   11996 oci.go:103] Successfully created a docker volume cilium-20220602104456-2113
	I0602 10:58:04.318245   11996 cli_runner.go:164] Run: docker run --rm --name cilium-20220602104456-2113-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --entrypoint /usr/bin/test -v cilium-20220602104456-2113:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 10:58:04.702831   11996 oci.go:107] Successfully prepared a docker volume cilium-20220602104456-2113
	I0602 10:58:04.702879   11996 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 10:58:04.702893   11996 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 10:58:04.703087   11996 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220602104456-2113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 10:58:10.040271   11996 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220602104456-2113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (5.337101496s)
	I0602 10:58:10.040291   11996 kic.go:188] duration metric: took 5.337379 seconds to extract preloaded images to volume
	I0602 10:58:10.040386   11996 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 10:58:10.182727   11996 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220602104456-2113 --name cilium-20220602104456-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220602104456-2113 --network cilium-20220602104456-2113 --ip 192.168.58.2 --volume cilium-20220602104456-2113:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	W0602 10:58:10.318735   11996 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220602104456-2113 --name cilium-20220602104456-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220602104456-2113 --network cilium-20220602104456-2113 --ip 192.168.58.2 --volume cilium-20220602104456-2113:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 returned with exit code 125
	I0602 10:58:10.318808   11996 client.go:171] LocalClient.Create took 6.499500831s
	I0602 10:58:12.319627   11996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 10:58:12.319711   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:12.391361   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:12.391436   11996 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:12.592087   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:12.666009   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:12.666100   11996 retry.go:31] will retry after 380.704736ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:13.047052   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:13.116796   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:13.116875   11996 retry.go:31] will retry after 738.922478ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:13.856026   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:13.928406   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	W0602 10:58:13.928491   11996 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 10:58:13.928503   11996 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:13.928552   11996 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 10:58:13.928598   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:13.999874   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:13.999958   11996 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:14.222319   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:14.307860   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:14.308008   11996 retry.go:31] will retry after 306.771815ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:14.615558   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:14.680290   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:14.680425   11996 retry.go:31] will retry after 545.000538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:15.225736   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:15.295467   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	W0602 10:58:15.295561   11996 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 10:58:15.295576   11996 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:15.295584   11996 start.go:134] duration metric: createHost completed in 11.498544388s
	I0602 10:58:15.295680   11996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 10:58:15.295740   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:15.366488   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:15.366572   11996 retry.go:31] will retry after 198.275464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:15.566868   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:15.631597   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:15.631683   11996 retry.go:31] will retry after 442.156754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:16.074052   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:16.148403   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:16.148479   11996 retry.go:31] will retry after 404.186092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:16.552911   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:16.620646   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:16.620735   11996 retry.go:31] will retry after 593.313927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:17.214337   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:17.281258   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	W0602 10:58:17.281336   11996 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 10:58:17.281349   11996 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:17.281414   11996 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 10:58:17.281472   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:17.349922   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:17.350000   11996 retry.go:31] will retry after 267.668319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:17.619853   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:17.687853   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:17.687936   11996 retry.go:31] will retry after 510.934289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:18.199467   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:18.268791   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	I0602 10:58:18.268910   11996 retry.go:31] will retry after 446.126762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:18.716819   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113
	W0602 10:58:18.803790   11996 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602104456-2113 returned with exit code 1
	W0602 10:58:18.803870   11996 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 10:58:18.803885   11996 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 10:58:18.803894   11996 fix.go:57] fixHost completed within 3m22.895330454s
	I0602 10:58:18.803900   11996 start.go:81] releasing machines lock for "cilium-20220602104456-2113", held for 3m22.895365533s
	W0602 10:58:18.804086   11996 out.go:239] * Failed to start docker container. Running "minikube delete -p cilium-20220602104456-2113" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220602104456-2113 --name cilium-20220602104456-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220602104456-2113 --network cilium-20220602104456-2113 --ip 192.168.58.2 --volume cilium-20220602104456-2113:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: exit status 125
	stdout:
	6d42a6d7555a678b8ccae1bd53fe6ded8dd65ca55ca06ac559b22cdf15ddb4f3
	
	stderr:
	docker: Error response from daemon: network cilium-20220602104456-2113 not found.
	
	* Failed to start docker container. Running "minikube delete -p cilium-20220602104456-2113" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220602104456-2113 --name cilium-20220602104456-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220602104456-2113 --network cilium-20220602104456-2113 --ip 192.168.58.2 --volume cilium-20220602104456-2113:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: exit status 125
	stdout:
	6d42a6d7555a678b8ccae1bd53fe6ded8dd65ca55ca06ac559b22cdf15ddb4f3
	
	stderr:
	docker: Error response from daemon: network cilium-20220602104456-2113 not found.
	
	I0602 10:58:18.845602   11996 out.go:177] 
	W0602 10:58:18.872157   11996 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220602104456-2113 --name cilium-20220602104456-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220602104456-2113 --network cilium-20220602104456-2113 --ip 192.168.58.2 --volume cilium-20220602104456-2113:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: exit status 125
	stdout:
	6d42a6d7555a678b8ccae1bd53fe6ded8dd65ca55ca06ac559b22cdf15ddb4f3
	
	stderr:
	docker: Error response from daemon: network cilium-20220602104456-2113 not found.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220602104456-2113 --name cilium-20220602104456-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602104456-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220602104456-2113 --network cilium-20220602104456-2113 --ip 192.168.58.2 --volume cilium-20220602104456-2113:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: exit status 125
	stdout:
	6d42a6d7555a678b8ccae1bd53fe6ded8dd65ca55ca06ac559b22cdf15ddb4f3
	
	stderr:
	docker: Error response from daemon: network cilium-20220602104456-2113 not found.
	
	W0602 10:58:18.872200   11996 out.go:239] * 
	* 
	W0602 10:58:18.873365   11996 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 10:58:18.941057   11996 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (220.65s)
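Summary of this failure: minikube created the docker network cilium-20220602104456-2113, but the subsequent docker run with --network cilium-20220602104456-2113 failed with "network not found", first on 192.168.49.0/24 and again after the recreate on 192.168.58.0/24, so the profile exited with GUEST_PROVISION (exit status 80). A rough manual triage sketch for this state on the affected worker; these are standard docker/minikube CLI commands, not output captured from this run:

	docker network ls --filter name=cilium-20220602104456-2113      # check whether the profile network actually exists
	docker container ls -a --filter name=cilium-20220602104456-2113 # look for the half-created kic container
	minikube delete -p cilium-20220602104456-2113                   # the cleanup the log itself recommends
	docker network prune -f                                         # optionally clear stale minikube networks before retrying
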

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (250.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220602105906-2113 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220602105906-2113 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m10.18308406s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20220602105906-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node old-k8s-version-20220602105906-2113 in cluster old-k8s-version-20220602105906-2113
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 10:59:06.657434   13177 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:59:06.658001   13177 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:59:06.658010   13177 out.go:309] Setting ErrFile to fd 2...
	I0602 10:59:06.658017   13177 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:59:06.658246   13177 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 10:59:06.658808   13177 out.go:303] Setting JSON to false
	I0602 10:59:06.675121   13177 start.go:115] hostinfo: {"hostname":"37309.local","uptime":3516,"bootTime":1654189230,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 10:59:06.675232   13177 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 10:59:06.698712   13177 out.go:177] * [old-k8s-version-20220602105906-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 10:59:06.741203   13177 notify.go:193] Checking for updates...
	I0602 10:59:06.762808   13177 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 10:59:06.805996   13177 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 10:59:06.850136   13177 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 10:59:06.894058   13177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 10:59:06.958916   13177 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 10:59:06.981617   13177 config.go:178] Loaded profile config "kubenet-20220602104455-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 10:59:06.981713   13177 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 10:59:07.055616   13177 docker.go:137] docker version: linux-20.10.14
	I0602 10:59:07.055769   13177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:59:07.185207   13177 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 17:59:07.124014507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:59:07.207878   13177 out.go:177] * Using the docker driver based on user configuration
	I0602 10:59:07.232658   13177 start.go:284] selected driver: docker
	I0602 10:59:07.232671   13177 start.go:806] validating driver "docker" against <nil>
	I0602 10:59:07.232686   13177 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 10:59:07.235060   13177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:59:07.361866   13177 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 17:59:07.302110208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:59:07.362007   13177 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 10:59:07.362160   13177 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 10:59:07.383874   13177 out.go:177] * Using Docker Desktop driver with the root privilege
	I0602 10:59:07.403911   13177 cni.go:95] Creating CNI manager for ""
	I0602 10:59:07.403937   13177 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:59:07.403952   13177 start_flags.go:306] config:
	{Name:old-k8s-version-20220602105906-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:59:07.425901   13177 out.go:177] * Starting control plane node old-k8s-version-20220602105906-2113 in cluster old-k8s-version-20220602105906-2113
	I0602 10:59:07.500041   13177 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 10:59:07.521820   13177 out.go:177] * Pulling base image ...
	I0602 10:59:07.595987   13177 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 10:59:07.596005   13177 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 10:59:07.596069   13177 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0602 10:59:07.596093   13177 cache.go:57] Caching tarball of preloaded images
	I0602 10:59:07.596303   13177 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 10:59:07.596335   13177 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0602 10:59:07.597330   13177 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/config.json ...
	I0602 10:59:07.597479   13177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/config.json: {Name:mkb136ecb8eeca70f0ce5d2277193a14ddb90569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:59:07.666857   13177 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 10:59:07.666913   13177 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 10:59:07.666926   13177 cache.go:206] Successfully downloaded all kic artifacts
	I0602 10:59:07.667002   13177 start.go:352] acquiring machines lock for old-k8s-version-20220602105906-2113: {Name:mk7f6a3ed7e2845a9fdc2d9a313dfa02067477c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:59:07.667177   13177 start.go:356] acquired machines lock for "old-k8s-version-20220602105906-2113" in 162.28µs
	I0602 10:59:07.667210   13177 start.go:91] Provisioning new machine with config: &{Name:old-k8s-version-20220602105906-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 10:59:07.667282   13177 start.go:131] createHost starting for "" (driver="docker")
	I0602 10:59:07.679687   13177 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0602 10:59:07.679984   13177 start.go:165] libmachine.API.Create for "old-k8s-version-20220602105906-2113" (driver="docker")
	I0602 10:59:07.680026   13177 client.go:168] LocalClient.Create starting
	I0602 10:59:07.680136   13177 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem
	I0602 10:59:07.680196   13177 main.go:134] libmachine: Decoding PEM data...
	I0602 10:59:07.680219   13177 main.go:134] libmachine: Parsing certificate...
	I0602 10:59:07.680324   13177 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem
	I0602 10:59:07.680371   13177 main.go:134] libmachine: Decoding PEM data...
	I0602 10:59:07.680391   13177 main.go:134] libmachine: Parsing certificate...
	I0602 10:59:07.681149   13177 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220602105906-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 10:59:07.745900   13177 cli_runner.go:211] docker network inspect old-k8s-version-20220602105906-2113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 10:59:07.746020   13177 network_create.go:272] running [docker network inspect old-k8s-version-20220602105906-2113] to gather additional debugging logs...
	I0602 10:59:07.746042   13177 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220602105906-2113
	W0602 10:59:07.810379   13177 cli_runner.go:211] docker network inspect old-k8s-version-20220602105906-2113 returned with exit code 1
	I0602 10:59:07.810403   13177 network_create.go:275] error running [docker network inspect old-k8s-version-20220602105906-2113]: docker network inspect old-k8s-version-20220602105906-2113: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220602105906-2113
	I0602 10:59:07.810434   13177 network_create.go:277] output of [docker network inspect old-k8s-version-20220602105906-2113]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220602105906-2113
	
	** /stderr **
	I0602 10:59:07.810531   13177 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 10:59:07.873589   13177 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0010024d0] misses:0}
	I0602 10:59:07.873625   13177 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 10:59:07.873640   13177 network_create.go:115] attempt to create docker network old-k8s-version-20220602105906-2113 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 10:59:07.873709   13177 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220602105906-2113
	I0602 10:59:07.969994   13177 network_create.go:99] docker network old-k8s-version-20220602105906-2113 192.168.49.0/24 created
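For reference, the bridge network the run just created can be checked or removed by hand with plain docker CLI calls. This is only an illustrative sketch using the network name from the log above; the test itself does not run these commands:

	# Print the subnet/gateway minikube reserved (192.168.49.0/24, gateway 192.168.49.1 per the log)
	docker network inspect old-k8s-version-20220602105906-2113 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} gw={{.Gateway}}{{end}}'
	# Clean the network up once the profile has been deleted
	docker network rm old-k8s-version-20220602105906-2113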
	I0602 10:59:07.970029   13177 kic.go:106] calculated static IP "192.168.49.2" for the "old-k8s-version-20220602105906-2113" container
	I0602 10:59:07.970135   13177 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 10:59:08.036580   13177 cli_runner.go:164] Run: docker volume create old-k8s-version-20220602105906-2113 --label name.minikube.sigs.k8s.io=old-k8s-version-20220602105906-2113 --label created_by.minikube.sigs.k8s.io=true
	I0602 10:59:08.099596   13177 oci.go:103] Successfully created a docker volume old-k8s-version-20220602105906-2113
	I0602 10:59:08.099698   13177 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-20220602105906-2113-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220602105906-2113 --entrypoint /usr/bin/test -v old-k8s-version-20220602105906-2113:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 10:59:08.582372   13177 oci.go:107] Successfully prepared a docker volume old-k8s-version-20220602105906-2113
	I0602 10:59:08.582546   13177 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 10:59:08.582560   13177 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 10:59:08.582665   13177 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220602105906-2113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 10:59:12.539817   13177 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220602105906-2113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (3.95704567s)
	I0602 10:59:12.539838   13177 kic.go:188] duration metric: took 3.957265 seconds to extract preloaded images to volume
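The two docker run invocations above first prime the named volume (the throwaway --entrypoint /usr/bin/test sidecar) and then untar the lz4 preload into it in roughly four seconds. If that extraction ever needs to be verified by hand, a rough sketch is to mount the same volume into any small image and list it; busybox is only an illustrative choice here, and the lib/docker layout is an assumption about the preload tarball's contents:

	# List what the preload tarball extracted into the profile's named volume
	docker run --rm -v old-k8s-version-20220602105906-2113:/extractDir busybox ls /extractDir/lib/docker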
	I0602 10:59:12.539945   13177 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 10:59:12.684369   13177 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220602105906-2113 --name old-k8s-version-20220602105906-2113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220602105906-2113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220602105906-2113 --network old-k8s-version-20220602105906-2113 --ip 192.168.49.2 --volume old-k8s-version-20220602105906-2113:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	I0602 10:59:13.072913   13177 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220602105906-2113 --format={{.State.Running}}
	I0602 10:59:13.148864   13177 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220602105906-2113 --format={{.State.Status}}
	I0602 10:59:13.223304   13177 cli_runner.go:164] Run: docker exec old-k8s-version-20220602105906-2113 stat /var/lib/dpkg/alternatives/iptables
	I0602 10:59:13.357947   13177 oci.go:247] the created container "old-k8s-version-20220602105906-2113" has a running status.
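The long docker run above is where the values minikube computed (2 CPUs, 2200 MB of memory, the static IP 192.168.49.2, and the published SSH/API ports) become the node container. Those settings can be read back off the created container; an illustrative sketch using standard inspect templates, not something the test executes:

	# Memory limit (bytes), CPU quota (nanocpus) and published ports of the node container
	docker container inspect old-k8s-version-20220602105906-2113 \
	  --format 'mem={{.HostConfig.Memory}} cpus={{.HostConfig.NanoCpus}} ports={{json .NetworkSettings.Ports}}'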
	I0602 10:59:13.357980   13177 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa...
	I0602 10:59:13.414189   13177 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 10:59:13.526682   13177 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220602105906-2113 --format={{.State.Status}}
	I0602 10:59:13.598010   13177 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 10:59:13.598026   13177 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220602105906-2113 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0602 10:59:13.726776   13177 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220602105906-2113 --format={{.State.Status}}
	I0602 10:59:13.797369   13177 machine.go:88] provisioning docker machine ...
	I0602 10:59:13.797410   13177 ubuntu.go:169] provisioning hostname "old-k8s-version-20220602105906-2113"
	I0602 10:59:13.797510   13177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 10:59:13.868185   13177 main.go:134] libmachine: Using SSH client type: native
	I0602 10:59:13.868374   13177 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51436 <nil> <nil>}
	I0602 10:59:13.868387   13177 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220602105906-2113 && echo "old-k8s-version-20220602105906-2113" | sudo tee /etc/hostname
	I0602 10:59:13.992654   13177 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220602105906-2113
	
	I0602 10:59:13.992727   13177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 10:59:14.065701   13177 main.go:134] libmachine: Using SSH client type: native
	I0602 10:59:14.065855   13177 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51436 <nil> <nil>}
	I0602 10:59:14.065870   13177 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220602105906-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220602105906-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220602105906-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 10:59:14.181889   13177 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 10:59:14.181908   13177 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 10:59:14.181924   13177 ubuntu.go:177] setting up certificates
	I0602 10:59:14.181931   13177 provision.go:83] configureAuth start
	I0602 10:59:14.181997   13177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220602105906-2113
	I0602 10:59:14.252980   13177 provision.go:138] copyHostCerts
	I0602 10:59:14.253051   13177 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 10:59:14.253059   13177 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 10:59:14.253156   13177 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 10:59:14.253348   13177 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 10:59:14.253357   13177 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 10:59:14.253413   13177 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 10:59:14.253548   13177 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 10:59:14.253554   13177 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 10:59:14.253611   13177 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 10:59:14.253727   13177 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220602105906-2113 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220602105906-2113]
	I0602 10:59:14.421635   13177 provision.go:172] copyRemoteCerts
	I0602 10:59:14.421708   13177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 10:59:14.421756   13177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 10:59:14.492125   13177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51436 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 10:59:14.575839   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 10:59:14.594156   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0602 10:59:14.611035   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 10:59:14.627518   13177 provision.go:86] duration metric: configureAuth took 445.572222ms
	I0602 10:59:14.627531   13177 ubuntu.go:193] setting minikube options for container-runtime
	I0602 10:59:14.627665   13177 config.go:178] Loaded profile config "old-k8s-version-20220602105906-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0602 10:59:14.627727   13177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 10:59:14.698964   13177 main.go:134] libmachine: Using SSH client type: native
	I0602 10:59:14.699112   13177 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51436 <nil> <nil>}
	I0602 10:59:14.699134   13177 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 10:59:14.818111   13177 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 10:59:14.818126   13177 ubuntu.go:71] root file system type: overlay
	I0602 10:59:14.818318   13177 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 10:59:14.818400   13177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 10:59:14.889854   13177 main.go:134] libmachine: Using SSH client type: native
	I0602 10:59:14.890002   13177 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51436 <nil> <nil>}
	I0602 10:59:14.890048   13177 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 10:59:15.016940   13177 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 10:59:15.017027   13177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 10:59:15.087961   13177 main.go:134] libmachine: Using SSH client type: native
	I0602 10:59:15.088172   13177 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51436 <nil> <nil>}
	I0602 10:59:15.088186   13177 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 10:59:15.728687   13177 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:59:15.027622837 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0602 10:59:15.728708   13177 machine.go:91] provisioned docker machine in 1.931313757s
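The diff above is the whole story of the docker.service rewrite: the stock ExecStart is cleared, the TLS-enabled dockerd command line is substituted, and the unit is only swapped in and the daemon restarted when the generated file actually differs. If the override needs to be confirmed by hand after the restart, something along these lines would do it (illustrative, not part of the test run):

	# Show the unit systemd actually loaded and the ExecStart now in effect inside the node container
	docker exec old-k8s-version-20220602105906-2113 systemctl cat docker.service
	docker exec old-k8s-version-20220602105906-2113 systemctl show docker --property=ExecStart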
	I0602 10:59:15.728714   13177 client.go:171] LocalClient.Create took 8.048657476s
	I0602 10:59:15.728733   13177 start.go:173] duration metric: libmachine.API.Create for "old-k8s-version-20220602105906-2113" took 8.048724227s
	I0602 10:59:15.728742   13177 start.go:306] post-start starting for "old-k8s-version-20220602105906-2113" (driver="docker")
	I0602 10:59:15.728746   13177 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 10:59:15.728810   13177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 10:59:15.728863   13177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 10:59:15.805013   13177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51436 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 10:59:15.891037   13177 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 10:59:15.895055   13177 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 10:59:15.895071   13177 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 10:59:15.895078   13177 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 10:59:15.895084   13177 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 10:59:15.895094   13177 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 10:59:15.895194   13177 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 10:59:15.895353   13177 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 10:59:15.895506   13177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 10:59:15.902935   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 10:59:15.922027   13177 start.go:309] post-start completed in 193.274177ms
	I0602 10:59:15.922617   13177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220602105906-2113
	I0602 10:59:16.029252   13177 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/config.json ...
	I0602 10:59:16.029692   13177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 10:59:16.029739   13177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 10:59:16.146164   13177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51436 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 10:59:16.230406   13177 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 10:59:16.235621   13177 start.go:134] duration metric: createHost completed in 8.568295975s
	I0602 10:59:16.235647   13177 start.go:81] releasing machines lock for "old-k8s-version-20220602105906-2113", held for 8.568426566s
	I0602 10:59:16.235734   13177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220602105906-2113
	I0602 10:59:16.308950   13177 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 10:59:16.308951   13177 ssh_runner.go:195] Run: systemctl --version
	I0602 10:59:16.309028   13177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 10:59:16.309046   13177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 10:59:16.402124   13177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51436 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 10:59:16.406032   13177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51436 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 10:59:16.619169   13177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 10:59:16.629112   13177 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 10:59:16.638758   13177 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 10:59:16.638812   13177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 10:59:16.647929   13177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 10:59:16.660695   13177 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 10:59:16.751356   13177 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 10:59:16.829841   13177 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 10:59:16.840043   13177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 10:59:16.903632   13177 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 10:59:16.913051   13177 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 10:59:16.947538   13177 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 10:59:17.026426   13177 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0602 10:59:17.026629   13177 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220602105906-2113 dig +short host.docker.internal
	I0602 10:59:17.160820   13177 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 10:59:17.160925   13177 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 10:59:17.165297   13177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
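That one-liner is minikube's idempotent /etc/hosts update: drop any stale host.minikube.internal line, append the mapping for the host IP it just resolved with dig, and copy the result back over /etc/hosts. Unrolled for readability as a sketch of the same pattern (ENTRY is a name introduced here, not something minikube uses):

	ENTRY="192.168.65.2	host.minikube.internal"
	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$   # keep everything except the old entry
	echo "$ENTRY" >> /tmp/h.$$                                    # append the current mapping
	sudo cp /tmp/h.$$ /etc/hosts                                  # replace the file in a single copy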
	I0602 10:59:17.174656   13177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 10:59:17.245072   13177 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 10:59:17.245152   13177 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 10:59:17.275798   13177 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0602 10:59:17.275815   13177 docker.go:541] Images already preloaded, skipping extraction
	I0602 10:59:17.275888   13177 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 10:59:17.307517   13177 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0602 10:59:17.307538   13177 cache_images.go:84] Images are preloaded, skipping loading
	I0602 10:59:17.307617   13177 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 10:59:17.380365   13177 cni.go:95] Creating CNI manager for ""
	I0602 10:59:17.380377   13177 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:59:17.380393   13177 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 10:59:17.380408   13177 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220602105906-2113 NodeName:old-k8s-version-20220602105906-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 10:59:17.380527   13177 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220602105906-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220602105906-2113
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
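One detail worth noting in the generated config: the KubeletConfiguration pins cgroupDriver: systemd, which matches the docker info --format {{.CgroupDriver}} query run inside the node a few lines earlier, since the kubelet and the container runtime must agree on the cgroup driver. A manual cross-check would look like this (illustrative only, not run by the test):

	# Expected to print "systemd" for this profile, matching cgroupDriver: systemd above
	docker exec old-k8s-version-20220602105906-2113 docker info --format '{{.CgroupDriver}}'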
	
	I0602 10:59:17.380610   13177 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220602105906-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 10:59:17.380668   13177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0602 10:59:17.388144   13177 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 10:59:17.388215   13177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 10:59:17.394985   13177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0602 10:59:17.407709   13177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 10:59:17.422890   13177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2146 bytes)
	I0602 10:59:17.436128   13177 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0602 10:59:17.440054   13177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 10:59:17.449977   13177 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113 for IP: 192.168.49.2
	I0602 10:59:17.450086   13177 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 10:59:17.450163   13177 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 10:59:17.450207   13177 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/client.key
	I0602 10:59:17.450218   13177 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/client.crt with IP's: []
	I0602 10:59:17.597805   13177 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/client.crt ...
	I0602 10:59:17.597823   13177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/client.crt: {Name:mk24dfc4208d103a170b009305765797a60c9751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:59:17.598171   13177 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/client.key ...
	I0602 10:59:17.598181   13177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/client.key: {Name:mk5aac42c32754028a1b23f4de3c04d936003fb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:59:17.598384   13177 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.key.dd3b5fb2
	I0602 10:59:17.598400   13177 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0602 10:59:17.682203   13177 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.crt.dd3b5fb2 ...
	I0602 10:59:17.682225   13177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.crt.dd3b5fb2: {Name:mk694c076dff3dfb9e4632cb5c02a448e546f2b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:59:17.682528   13177 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.key.dd3b5fb2 ...
	I0602 10:59:17.682540   13177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.key.dd3b5fb2: {Name:mk52d5b8df46e2fab4059913b1813df05647207b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:59:17.682728   13177 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.crt
	I0602 10:59:17.682883   13177 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.key
	I0602 10:59:17.683040   13177 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.key
	I0602 10:59:17.683058   13177 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.crt with IP's: []
	I0602 10:59:17.790636   13177 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.crt ...
	I0602 10:59:17.790651   13177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.crt: {Name:mk49c628807abdc15faf69d5e5310ac4e948eb8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:59:17.790943   13177 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.key ...
	I0602 10:59:17.790951   13177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.key: {Name:mk59b8d85f462ff7c506268d4a426584ea6b4f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:59:17.791330   13177 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 10:59:17.791379   13177 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 10:59:17.791388   13177 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 10:59:17.791442   13177 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 10:59:17.791472   13177 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 10:59:17.791497   13177 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 10:59:17.791562   13177 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 10:59:17.792030   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 10:59:17.811815   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 10:59:17.831084   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 10:59:17.853345   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 10:59:17.906006   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 10:59:17.925618   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 10:59:17.943671   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 10:59:17.965076   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 10:59:17.983927   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 10:59:18.004672   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 10:59:18.024556   13177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 10:59:18.044285   13177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 10:59:18.059381   13177 ssh_runner.go:195] Run: openssl version
	I0602 10:59:18.065482   13177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 10:59:18.075188   13177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 10:59:18.080499   13177 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 10:59:18.080566   13177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 10:59:18.088637   13177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
	I0602 10:59:18.098436   13177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 10:59:18.109229   13177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 10:59:18.114368   13177 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 10:59:18.114440   13177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 10:59:18.121097   13177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 10:59:18.131358   13177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 10:59:18.142095   13177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:59:18.146896   13177 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:59:18.146940   13177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 10:59:18.152333   13177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
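	(Sketch: the openssl/ln sequence above follows OpenSSL's subject-hash lookup convention — each CA under /usr/share/ca-certificates is hashed and symlinked into /etc/ssl/certs as <hash>.0. It can be checked by hand on the node with the same commands the log runs; the hash b5213941 is the one shown in the log line above.)
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash (b5213941 here)
	  ls -l /etc/ssl/certs/b5213941.0                                           # the symlink minikube created for that hash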
	I0602 10:59:18.160866   13177 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220602105906-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:59:18.160973   13177 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 10:59:18.193505   13177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 10:59:18.203770   13177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 10:59:18.211912   13177 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 10:59:18.211970   13177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 10:59:18.221678   13177 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 10:59:18.221710   13177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 10:59:18.988140   13177 out.go:204]   - Generating certificates and keys ...
	I0602 10:59:22.210488   13177 out.go:204]   - Booting up control plane ...
	W0602 11:01:17.201848   13177 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-20220602105906-2113 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-20220602105906-2113 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
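	(Triage sketch: the kubeadm output above is polling the kubelet's local health endpoint on port 10248 and timing out. The probe and the follow-up commands below are all quoted verbatim from that output and can be replayed on the node.)
	  curl -sSL http://localhost:10248/healthz         # the kubelet health probe kubeadm keeps retrying
	  systemctl status kubelet                          # is the kubelet unit actually running?
	  journalctl -xeu kubelet                           # kubelet logs explaining why it is not
	  docker ps -a | grep kube | grep -v pause          # did any control-plane container come up at all?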
	
	I0602 11:01:17.201892   13177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:01:17.624285   13177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:01:17.633934   13177 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:01:17.633982   13177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:01:17.643235   13177 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:01:17.643259   13177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:01:18.372150   13177 out.go:204]   - Generating certificates and keys ...
	I0602 11:01:19.270354   13177 out.go:204]   - Booting up control plane ...
	I0602 11:03:14.185019   13177 kubeadm.go:397] StartCluster complete in 3m55.943293008s
	I0602 11:03:14.185105   13177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:03:14.216948   13177 logs.go:274] 0 containers: []
	W0602 11:03:14.216960   13177 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:03:14.217018   13177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:03:14.249201   13177 logs.go:274] 0 containers: []
	W0602 11:03:14.249212   13177 logs.go:276] No container was found matching "etcd"
	I0602 11:03:14.249269   13177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:03:14.277701   13177 logs.go:274] 0 containers: []
	W0602 11:03:14.277713   13177 logs.go:276] No container was found matching "coredns"
	I0602 11:03:14.277771   13177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:03:14.306606   13177 logs.go:274] 0 containers: []
	W0602 11:03:14.306620   13177 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:03:14.306685   13177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:03:14.334573   13177 logs.go:274] 0 containers: []
	W0602 11:03:14.334587   13177 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:03:14.334653   13177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:03:14.362580   13177 logs.go:274] 0 containers: []
	W0602 11:03:14.362596   13177 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:03:14.362654   13177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:03:14.391535   13177 logs.go:274] 0 containers: []
	W0602 11:03:14.391548   13177 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:03:14.391609   13177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:03:14.421197   13177 logs.go:274] 0 containers: []
	W0602 11:03:14.421210   13177 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:03:14.421217   13177 logs.go:123] Gathering logs for kubelet ...
	I0602 11:03:14.421224   13177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:03:14.460945   13177 logs.go:123] Gathering logs for dmesg ...
	I0602 11:03:14.460959   13177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:03:14.472315   13177 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:03:14.472327   13177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:03:14.526860   13177 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:03:14.526871   13177 logs.go:123] Gathering logs for Docker ...
	I0602 11:03:14.526878   13177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:03:14.540678   13177 logs.go:123] Gathering logs for container status ...
	I0602 11:03:14.540691   13177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:03:16.592167   13177 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051429022s)
	W0602 11:03:16.592310   13177 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0602 11:03:16.592324   13177 out.go:239] * 
	W0602 11:03:16.592463   13177 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 11:03:16.592477   13177 out.go:239] * 
	W0602 11:03:16.593018   13177 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 11:03:16.645630   13177 out.go:177] 
	W0602 11:03:16.710926   13177 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 11:03:16.711065   13177 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0602 11:03:16.711143   13177 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0602 11:03:16.753453   13177 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220602105906-2113 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220602105906-2113
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220602105906-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07",
	        "Created": "2022-06-02T17:59:12.760386506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 188985,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:59:13.075084596Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/hostname",
	        "HostsPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/hosts",
	        "LogPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07-json.log",
	        "Name": "/old-k8s-version-20220602105906-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220602105906-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220602105906-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220602105906-2113",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220602105906-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220602105906-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220602105906-2113",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220602105906-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ad5958668e3fabbf5869c7b770d9fd84649ac2d61e58956e673a5bb6e9424ac",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51439"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51440"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7ad5958668e3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220602105906-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "61b85e98188b",
	                        "old-k8s-version-20220602105906-2113"
	                    ],
	                    "NetworkID": "fefb74a76593392c8406a972f20a5745c2403bb46ee6809bd1a18584d4cbeee4",
	                    "EndpointID": "326af92ded60b2fe7c732d33b91fb01d5f8a286b5da115cabcd7e0800bad637e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
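The inspect dump above mainly confirms that the kic container itself is up: State.Status is "running" and the SSH and apiserver ports are published on 127.0.0.1. As a hedged shortcut (not part of the test run), the same two facts can be pulled without reading the full JSON, using docker's --format template and the container name from the dump:

	# container state, and the host port mapped to the apiserver port 8443
	docker inspect -f '{{.State.Status}}' old-k8s-version-20220602105906-2113
	docker port old-k8s-version-20220602105906-2113 8443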
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113: exit status 6 (449.650641ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 11:03:17.406260   13652 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220602105906-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220602105906-2113" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (250.73s)
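The kubeadm output above shows the kubelet never answering its health check, and minikube's own suggestion (the out.go:239 warning) is to retry with the systemd cgroup driver. A minimal sketch of that retry follows; the profile name, memory, driver and Kubernetes version are copied from the failing command, the --extra-config flag is the one the warning names, and deleting the half-built profile first is an assumption rather than something the log asks for.

	# remove the partially started profile, then retry with the suggested cgroup driver
	out/minikube-darwin-amd64 delete -p old-k8s-version-20220602105906-2113
	out/minikube-darwin-amd64 start -p old-k8s-version-20220602105906-2113 \
		--memory=2200 --driver=docker --kubernetes-version=v1.16.0 \
		--extra-config=kubelet.cgroup-driver=systemd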

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220602105906-2113 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220602105906-2113 create -f testdata/busybox.yaml: exit status 1 (30.162821ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220602105906-2113" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:198: kubectl --context old-k8s-version-20220602105906-2113 create -f testdata/busybox.yaml failed: exit status 1
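The create fails before ever reaching a cluster: the profile has no entry in the kubeconfig because the earlier kubeadm init never completed, which is also what the "stale minikube-vm" warning from minikube status points at. A small sketch of confirming that and asking minikube to rewrite the entry; kubectl config get-contexts is an added check, while update-context is the command the warning itself recommends:

	# list the contexts kubectl knows about, then let minikube repair this profile's entry
	kubectl config get-contexts
	out/minikube-darwin-amd64 update-context -p old-k8s-version-20220602105906-2113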
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220602105906-2113
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220602105906-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07",
	        "Created": "2022-06-02T17:59:12.760386506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 188985,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:59:13.075084596Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/hostname",
	        "HostsPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/hosts",
	        "LogPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07-json.log",
	        "Name": "/old-k8s-version-20220602105906-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220602105906-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220602105906-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220602105906-2113",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220602105906-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220602105906-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220602105906-2113",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220602105906-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ad5958668e3fabbf5869c7b770d9fd84649ac2d61e58956e673a5bb6e9424ac",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51439"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51440"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7ad5958668e3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220602105906-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "61b85e98188b",
	                        "old-k8s-version-20220602105906-2113"
	                    ],
	                    "NetworkID": "fefb74a76593392c8406a972f20a5745c2403bb46ee6809bd1a18584d4cbeee4",
	                    "EndpointID": "326af92ded60b2fe7c732d33b91fb01d5f8a286b5da115cabcd7e0800bad637e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113: exit status 6 (464.883189ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 11:03:17.979689   13665 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220602105906-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220602105906-2113" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220602105906-2113
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220602105906-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07",
	        "Created": "2022-06-02T17:59:12.760386506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 188985,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:59:13.075084596Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/hostname",
	        "HostsPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/hosts",
	        "LogPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07-json.log",
	        "Name": "/old-k8s-version-20220602105906-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220602105906-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220602105906-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220602105906-2113",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220602105906-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220602105906-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220602105906-2113",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220602105906-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ad5958668e3fabbf5869c7b770d9fd84649ac2d61e58956e673a5bb6e9424ac",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51439"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51440"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7ad5958668e3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220602105906-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "61b85e98188b",
	                        "old-k8s-version-20220602105906-2113"
	                    ],
	                    "NetworkID": "fefb74a76593392c8406a972f20a5745c2403bb46ee6809bd1a18584d4cbeee4",
	                    "EndpointID": "326af92ded60b2fe7c732d33b91fb01d5f8a286b5da115cabcd7e0800bad637e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113: exit status 6 (451.385316ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0602 11:03:18.505045   13678 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220602105906-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220602105906-2113" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.10s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220602105906-2113 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0602 11:03:35.797818    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:03:36.105782    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:03:54.105752    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:03:54.110938    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:03:54.123266    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:03:54.143477    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:03:54.183661    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:03:54.264302    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:03:54.424969    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:03:54.747164    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:03:55.387803    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:03:56.668652    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:03:59.229763    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:03.529486    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:03.535352    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:03.547581    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:03.569769    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:03.609989    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:03.690316    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:03.850602    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:04.172885    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:04.350450    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:04.815151    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:06.095575    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:08.657896    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:12.684328    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 11:04:13.780260    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:14.590760    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:16.760882    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:20.054978    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:23.969618    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:24.021981    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:29.112298    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 11:04:35.073298    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:38.027695    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:04:44.504422    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
start_stop_delete_test.go:207: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220602105906-2113 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.141950938s)

-- stdout --
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:209: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220602105906-2113 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220602105906-2113 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220602105906-2113 describe deploy/metrics-server -n kube-system: exit status 1 (30.465225ms)

** stderr ** 
	error: context "old-k8s-version-20220602105906-2113" does not exist

** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220602105906-2113 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220602105906-2113
E0602 11:04:47.745045    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220602105906-2113:

-- stdout --
	[
	    {
	        "Id": "61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07",
	        "Created": "2022-06-02T17:59:12.760386506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 188985,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:59:13.075084596Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/hostname",
	        "HostsPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/hosts",
	        "LogPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07-json.log",
	        "Name": "/old-k8s-version-20220602105906-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220602105906-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220602105906-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220602105906-2113",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220602105906-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220602105906-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220602105906-2113",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220602105906-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ad5958668e3fabbf5869c7b770d9fd84649ac2d61e58956e673a5bb6e9424ac",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51439"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51440"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7ad5958668e3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220602105906-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "61b85e98188b",
	                        "old-k8s-version-20220602105906-2113"
	                    ],
	                    "NetworkID": "fefb74a76593392c8406a972f20a5745c2403bb46ee6809bd1a18584d4cbeee4",
	                    "EndpointID": "326af92ded60b2fe7c732d33b91fb01d5f8a286b5da115cabcd7e0800bad637e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113: exit status 6 (443.450582ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0602 11:04:48.195800   13738 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220602105906-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220602105906-2113" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.69s)

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (491.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220602105906-2113 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0602 11:04:51.655690    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 11:05:16.034766    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:05:25.465596    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:05:38.682785    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:05:52.170909    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 11:05:52.266681    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220602105906-2113 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m6.507629765s)

-- stdout --
	* [old-k8s-version-20220602105906-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	* Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220602105906-2113 in cluster old-k8s-version-20220602105906-2113
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20220602105906-2113" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0602 11:04:50.212912   13778 out.go:296] Setting OutFile to fd 1 ...
	I0602 11:04:50.213271   13778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:04:50.213277   13778 out.go:309] Setting ErrFile to fd 2...
	I0602 11:04:50.213283   13778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:04:50.213377   13778 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 11:04:50.213641   13778 out.go:303] Setting JSON to false
	I0602 11:04:50.229375   13778 start.go:115] hostinfo: {"hostname":"37309.local","uptime":3859,"bootTime":1654189231,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 11:04:50.229480   13778 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 11:04:50.251550   13778 out.go:177] * [old-k8s-version-20220602105906-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 11:04:50.294147   13778 notify.go:193] Checking for updates...
	I0602 11:04:50.315087   13778 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 11:04:50.336129   13778 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:04:50.357034   13778 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 11:04:50.399144   13778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 11:04:50.420008   13778 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 11:04:50.457779   13778 config.go:178] Loaded profile config "old-k8s-version-20220602105906-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0602 11:04:50.480033   13778 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0602 11:04:50.516984   13778 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 11:04:50.590398   13778 docker.go:137] docker version: linux-20.10.14
	I0602 11:04:50.590521   13778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:04:50.717181   13778 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:04:50.66469354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:04:50.739075   13778 out.go:177] * Using the docker driver based on existing profile
	I0602 11:04:50.759620   13778 start.go:284] selected driver: docker
	I0602 11:04:50.759645   13778 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220602105906-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mul
tiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:04:50.759795   13778 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 11:04:50.763139   13778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:04:50.890034   13778 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:04:50.837983116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:04:50.890213   13778 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 11:04:50.890234   13778 cni.go:95] Creating CNI manager for ""
	I0602 11:04:50.890242   13778 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:04:50.890251   13778 start_flags.go:306] config:
	{Name:old-k8s-version-20220602105906-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDom
ain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountSt
ring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:04:50.932780   13778 out.go:177] * Starting control plane node old-k8s-version-20220602105906-2113 in cluster old-k8s-version-20220602105906-2113
	I0602 11:04:50.953659   13778 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 11:04:50.974798   13778 out.go:177] * Pulling base image ...
	I0602 11:04:51.016700   13778 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 11:04:51.016726   13778 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 11:04:51.016784   13778 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0602 11:04:51.016813   13778 cache.go:57] Caching tarball of preloaded images
	I0602 11:04:51.016994   13778 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 11:04:51.017034   13778 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0602 11:04:51.017938   13778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/config.json ...
	I0602 11:04:51.082281   13778 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 11:04:51.082299   13778 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 11:04:51.082308   13778 cache.go:206] Successfully downloaded all kic artifacts
	I0602 11:04:51.082351   13778 start.go:352] acquiring machines lock for old-k8s-version-20220602105906-2113: {Name:mk7f6a3ed7e2845a9fdc2d9a313dfa02067477c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 11:04:51.082434   13778 start.go:356] acquired machines lock for "old-k8s-version-20220602105906-2113" in 59.982µs
	I0602 11:04:51.082454   13778 start.go:94] Skipping create...Using existing machine configuration
	I0602 11:04:51.082463   13778 fix.go:55] fixHost starting: 
	I0602 11:04:51.082690   13778 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220602105906-2113 --format={{.State.Status}}
	I0602 11:04:51.150104   13778 fix.go:103] recreateIfNeeded on old-k8s-version-20220602105906-2113: state=Stopped err=<nil>
	W0602 11:04:51.150141   13778 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 11:04:51.171923   13778 out.go:177] * Restarting existing docker container for "old-k8s-version-20220602105906-2113" ...
	I0602 11:04:51.192766   13778 cli_runner.go:164] Run: docker start old-k8s-version-20220602105906-2113
	I0602 11:04:51.562681   13778 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220602105906-2113 --format={{.State.Status}}
	I0602 11:04:51.662883   13778 kic.go:416] container "old-k8s-version-20220602105906-2113" state is running.
	I0602 11:04:51.663452   13778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220602105906-2113
	I0602 11:04:51.737105   13778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/config.json ...
	I0602 11:04:51.737509   13778 machine.go:88] provisioning docker machine ...
	I0602 11:04:51.737549   13778 ubuntu.go:169] provisioning hostname "old-k8s-version-20220602105906-2113"
	I0602 11:04:51.737658   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:51.809463   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:51.809681   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:51.809694   13778 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220602105906-2113 && echo "old-k8s-version-20220602105906-2113" | sudo tee /etc/hostname
	I0602 11:04:51.932527   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220602105906-2113
	
	I0602 11:04:51.932606   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.004974   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:52.005104   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:52.005119   13778 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220602105906-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220602105906-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220602105906-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 11:04:52.121395   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:04:52.121423   13778 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 11:04:52.121455   13778 ubuntu.go:177] setting up certificates
	I0602 11:04:52.121472   13778 provision.go:83] configureAuth start
	I0602 11:04:52.121550   13778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220602105906-2113
	I0602 11:04:52.192336   13778 provision.go:138] copyHostCerts
	I0602 11:04:52.192420   13778 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 11:04:52.192429   13778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 11:04:52.192520   13778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 11:04:52.192739   13778 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 11:04:52.192752   13778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 11:04:52.192807   13778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 11:04:52.192939   13778 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 11:04:52.192945   13778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 11:04:52.192998   13778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 11:04:52.193133   13778 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220602105906-2113 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220602105906-2113]
	I0602 11:04:52.320731   13778 provision.go:172] copyRemoteCerts
	I0602 11:04:52.320787   13778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 11:04:52.320827   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.392403   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:52.478826   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0602 11:04:52.497596   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0602 11:04:52.514656   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 11:04:52.533451   13778 provision.go:86] duration metric: configureAuth took 411.958536ms
	I0602 11:04:52.533463   13778 ubuntu.go:193] setting minikube options for container-runtime
	I0602 11:04:52.533626   13778 config.go:178] Loaded profile config "old-k8s-version-20220602105906-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0602 11:04:52.533686   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.603829   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:52.604076   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:52.604123   13778 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 11:04:52.720513   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 11:04:52.720529   13778 ubuntu.go:71] root file system type: overlay
	I0602 11:04:52.720687   13778 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 11:04:52.720759   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.791816   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:52.791987   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:52.792042   13778 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 11:04:52.916537   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 11:04:52.916616   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.986921   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:52.987077   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:52.987090   13778 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 11:04:53.105706   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:04:53.105744   13778 machine.go:91] provisioned docker machine in 1.368201682s
	I0602 11:04:53.105753   13778 start.go:306] post-start starting for "old-k8s-version-20220602105906-2113" (driver="docker")
	I0602 11:04:53.105759   13778 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 11:04:53.105828   13778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 11:04:53.105878   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:53.176368   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:53.262898   13778 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 11:04:53.266671   13778 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 11:04:53.266685   13778 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 11:04:53.266692   13778 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 11:04:53.266697   13778 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 11:04:53.266705   13778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 11:04:53.266812   13778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 11:04:53.266949   13778 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 11:04:53.267114   13778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 11:04:53.274148   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:04:53.291722   13778 start.go:309] post-start completed in 185.950644ms
	I0602 11:04:53.291805   13778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 11:04:53.291855   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:53.362608   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:53.445871   13778 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 11:04:53.450173   13778 fix.go:57] fixHost completed within 2.367659825s
	I0602 11:04:53.450189   13778 start.go:81] releasing machines lock for "old-k8s-version-20220602105906-2113", held for 2.367704829s
	I0602 11:04:53.450271   13778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220602105906-2113
	I0602 11:04:53.521262   13778 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 11:04:53.521302   13778 ssh_runner.go:195] Run: systemctl --version
	I0602 11:04:53.521350   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:53.521351   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:53.597060   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:53.598923   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:53.810081   13778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 11:04:53.822458   13778 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:04:53.832182   13778 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 11:04:53.832234   13778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 11:04:53.841612   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 11:04:53.854258   13778 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 11:04:53.920122   13778 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 11:04:53.988970   13778 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:04:53.999075   13778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 11:04:54.067007   13778 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 11:04:54.076634   13778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:04:54.111937   13778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:04:54.188721   13778 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0602 11:04:54.188859   13778 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220602105906-2113 dig +short host.docker.internal
	I0602 11:04:54.320880   13778 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 11:04:54.320996   13778 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 11:04:54.325104   13778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:04:54.334818   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:54.405836   13778 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 11:04:54.405911   13778 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:04:54.436193   13778 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0602 11:04:54.436207   13778 docker.go:541] Images already preloaded, skipping extraction
	I0602 11:04:54.436280   13778 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:04:54.467205   13778 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0602 11:04:54.467227   13778 cache_images.go:84] Images are preloaded, skipping loading
	I0602 11:04:54.467299   13778 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 11:04:54.542038   13778 cni.go:95] Creating CNI manager for ""
	I0602 11:04:54.542049   13778 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:04:54.542067   13778 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 11:04:54.542080   13778 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220602105906-2113 NodeName:old-k8s-version-20220602105906-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 11:04:54.542186   13778 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220602105906-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220602105906-2113
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 11:04:54.542264   13778 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220602105906-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 11:04:54.542338   13778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0602 11:04:54.550328   13778 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 11:04:54.550378   13778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 11:04:54.557754   13778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0602 11:04:54.570217   13778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 11:04:54.583212   13778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2146 bytes)
	I0602 11:04:54.595430   13778 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0602 11:04:54.598973   13778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:04:54.608290   13778 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113 for IP: 192.168.49.2
	I0602 11:04:54.608396   13778 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 11:04:54.608444   13778 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 11:04:54.608525   13778 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/client.key
	I0602 11:04:54.608588   13778 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.key.dd3b5fb2
	I0602 11:04:54.608636   13778 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.key
	I0602 11:04:54.608843   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 11:04:54.608888   13778 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 11:04:54.608900   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 11:04:54.608937   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 11:04:54.608966   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 11:04:54.608997   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 11:04:54.609062   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:04:54.609636   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 11:04:54.626606   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 11:04:54.643214   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 11:04:54.660634   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 11:04:54.678739   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 11:04:54.701311   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 11:04:54.718932   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 11:04:54.736064   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 11:04:54.752603   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 11:04:54.771409   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 11:04:54.788319   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 11:04:54.805672   13778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 11:04:54.819496   13778 ssh_runner.go:195] Run: openssl version
	I0602 11:04:54.825123   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 11:04:54.832756   13778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 11:04:54.836487   13778 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 11:04:54.836529   13778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 11:04:54.841628   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
	I0602 11:04:54.848799   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 11:04:54.856314   13778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 11:04:54.860364   13778 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 11:04:54.860406   13778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 11:04:54.865383   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 11:04:54.873566   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 11:04:54.881515   13778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:04:54.885348   13778 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:04:54.885384   13778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:04:54.890326   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 11:04:54.897388   13778 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220602105906-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:04:54.897507   13778 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:04:54.926275   13778 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 11:04:54.933771   13778 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 11:04:54.933784   13778 kubeadm.go:626] restartCluster start
	I0602 11:04:54.933827   13778 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 11:04:54.941071   13778 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:54.941133   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:55.012069   13778 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220602105906-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:04:55.012243   13778 kubeconfig.go:127] "old-k8s-version-20220602105906-2113" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig - will repair!
	I0602 11:04:55.012551   13778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:04:55.013835   13778 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 11:04:55.021171   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.021223   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.029814   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:55.230022   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.239655   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.250586   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:55.430691   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.430839   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.443438   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:55.629918   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.630056   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.642922   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:55.830069   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.830146   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.839562   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.029929   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.030041   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.040636   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.230080   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.230187   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.240520   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.430805   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.430932   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.442009   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.630654   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.630783   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.641383   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.832024   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.832186   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.843733   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.030158   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.030295   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.040942   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.230556   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.230665   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.240962   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.430085   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.430185   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.440845   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.632018   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.632152   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.642712   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.832058   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.832177   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.842760   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:58.031624   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:58.031750   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:58.041861   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:58.041871   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:58.041914   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:58.050439   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:58.050451   13778 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0602 11:04:58.050460   13778 kubeadm.go:1092] stopping kube-system containers ...
	I0602 11:04:58.050517   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:04:58.078781   13778 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 11:04:58.088953   13778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:04:58.096401   13778 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5743 Jun  2 18:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5779 Jun  2 18:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5923 Jun  2 18:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5731 Jun  2 18:01 /etc/kubernetes/scheduler.conf
	
	I0602 11:04:58.096451   13778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0602 11:04:58.104096   13778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0602 11:04:58.111337   13778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0602 11:04:58.118781   13778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0602 11:04:58.125918   13778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:04:58.133559   13778 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 11:04:58.133572   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:04:58.183775   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:04:58.896537   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:04:59.102587   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:04:59.155939   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:04:59.209147   13778 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:04:59.209209   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:04:59.720023   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:00.218628   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:00.717988   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:01.217869   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:01.720091   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:02.218009   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:02.720026   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:03.218005   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:03.719549   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:04.218269   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:04.720072   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:05.218193   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:05.719036   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:06.218362   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:06.718089   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:07.218191   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:07.720187   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:08.218889   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:08.720174   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:09.218254   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:09.718308   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:10.218927   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:10.718179   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:11.218634   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:11.720188   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:12.218325   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:12.718650   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:13.219209   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:13.718177   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:14.220264   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:14.720245   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:15.218876   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:15.718362   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:16.218196   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:16.720267   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:17.218738   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:17.720295   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:18.219639   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:18.719893   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:19.220307   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:19.718573   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:20.218881   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:20.718810   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:21.218435   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:21.720434   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:22.218420   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:22.720437   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:23.218341   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:23.718492   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:24.219768   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:24.718807   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:25.218974   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:25.720400   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:26.218669   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:26.720487   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:27.220515   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:27.720403   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:28.218730   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:28.718903   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:29.218531   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:29.720002   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:30.219067   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:30.720510   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:31.219729   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:31.720605   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:32.218956   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:32.720577   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:33.218952   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:33.720422   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:34.219560   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:34.718658   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:35.219547   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:35.720592   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:36.219099   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:36.719579   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:37.220649   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:37.718593   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:38.219903   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:38.719838   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:39.219406   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:39.718563   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:40.218840   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:40.718801   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:41.218646   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:41.720566   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:42.220521   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:42.718687   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:43.218743   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:43.719443   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:44.218763   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:44.718717   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:45.219727   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:45.719434   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:46.218669   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:46.719292   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:47.218839   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:47.720682   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:48.219900   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:48.718703   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:49.218731   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:49.718948   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:50.219516   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:50.718836   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:51.218950   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:51.719045   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:52.220332   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:52.719306   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:53.219458   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:53.719131   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:54.219966   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:54.718927   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:55.219031   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:55.718981   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:56.220088   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:56.718966   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:57.219844   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:57.718981   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:58.221005   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:58.719195   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:59.220136   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:05:59.250806   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.250818   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:05:59.250893   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:05:59.280792   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.280803   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:05:59.280863   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:05:59.308900   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.308911   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:05:59.308972   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:05:59.337622   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.337634   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:05:59.337694   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:05:59.368293   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.368306   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:05:59.368364   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:05:59.396426   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.396439   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:05:59.396499   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:05:59.425726   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.425739   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:05:59.425795   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:05:59.454519   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.454531   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:05:59.454538   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:05:59.454547   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:05:59.466217   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:05:59.466232   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:05:59.517449   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:05:59.517462   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:05:59.517469   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:05:59.530200   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:05:59.530214   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:01.585281   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055020437s)
	I0602 11:06:01.585394   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:01.585402   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:04.133605   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:04.221004   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:04.251619   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.251631   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:04.251691   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:04.292078   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.292092   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:04.292154   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:04.339824   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.339842   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:04.339915   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:04.377243   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.377271   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:04.377353   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:04.408245   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.408257   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:04.408326   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:04.441761   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.441772   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:04.441834   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:04.471465   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.471482   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:04.471551   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:04.507089   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.507101   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:04.507107   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:04.507115   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:04.522059   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:04.522082   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:04.592918   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:04.592943   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:04.592954   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:04.609191   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:04.609209   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:06.668244   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058988448s)
	I0602 11:06:06.668353   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:06.668361   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:09.209694   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:09.719071   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:09.751868   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.751881   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:09.751941   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:09.782377   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.782387   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:09.782461   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:09.812852   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.812866   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:09.812927   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:09.841271   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.841287   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:09.841355   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:09.869322   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.869337   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:09.869404   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:09.904831   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.904845   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:09.904924   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:09.935441   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.935452   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:09.935513   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:09.971502   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.971513   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:09.971520   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:09.971526   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:09.984595   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:09.984608   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:12.040057   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055401538s)
	I0602 11:06:12.040168   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:12.040175   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:12.084908   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:12.084928   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:12.099657   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:12.099674   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:12.176399   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:14.677774   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:14.719277   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:14.749284   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.749296   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:14.749352   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:14.779602   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.779617   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:14.779692   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:14.810304   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.810315   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:14.810375   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:14.840825   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.840837   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:14.840895   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:14.871176   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.871189   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:14.871245   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:14.899620   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.899632   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:14.899690   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:14.928084   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.928098   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:14.928152   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:14.958074   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.958086   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:14.958093   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:14.958100   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:14.998133   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:14.998148   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:15.010030   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:15.010044   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:15.062993   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:15.063012   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:15.063020   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:15.074991   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:15.075002   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:17.150624   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.075573411s)
	I0602 11:06:19.651352   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:19.721185   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:19.753754   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.753767   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:19.753824   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:19.785309   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.785320   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:19.785375   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:19.815519   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.815532   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:19.815592   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:19.844388   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.844403   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:19.844460   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:19.874394   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.874405   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:19.874463   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:19.903563   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.903575   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:19.903636   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:19.932385   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.932397   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:19.932455   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:19.961585   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.961597   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:19.961604   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:19.961611   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:20.002244   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:20.002257   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:20.014432   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:20.014446   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:20.076253   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:20.076266   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:20.076274   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:20.088518   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:20.088530   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:22.145216   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056638404s)
	I0602 11:06:24.646167   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:24.719768   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:24.751355   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.751366   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:24.751429   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:24.782962   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.782973   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:24.783035   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:24.813990   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.814003   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:24.814058   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:24.848961   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.848974   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:24.849032   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:24.878730   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.878742   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:24.878798   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:24.906982   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.906994   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:24.907050   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:24.938955   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.938968   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:24.939036   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:24.970095   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.970109   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:24.970122   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:24.970131   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:25.015415   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:25.015429   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:25.027601   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:25.027615   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:25.079664   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:25.079676   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:25.079685   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:25.091626   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:25.091642   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:27.149516   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057826153s)
	I0602 11:06:29.650792   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:29.721431   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:29.752590   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.752602   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:29.752682   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:29.781730   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.781745   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:29.781812   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:29.811830   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.811842   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:29.811899   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:29.844830   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.844842   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:29.844906   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:29.874059   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.874074   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:29.874138   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:29.903122   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.903134   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:29.903203   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:29.931909   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.931920   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:29.931981   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:29.959768   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.959780   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:29.959787   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:29.959793   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:29.971640   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:29.971654   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:32.025610   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053903096s)
	I0602 11:06:32.025734   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:32.025742   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:32.066635   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:32.066655   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:32.078867   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:32.078880   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:32.133725   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:34.634284   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:34.721701   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:34.751984   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.751995   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:34.752050   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:34.779859   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.779872   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:34.779929   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:34.809891   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.809902   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:34.809967   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:34.838099   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.838111   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:34.838170   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:34.866657   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.866673   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:34.866736   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:34.895965   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.895980   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:34.896037   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:34.924358   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.924371   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:34.924427   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:34.954617   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.954628   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:34.954635   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:34.954646   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:34.992693   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:34.992705   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:35.005024   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:35.005041   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:35.061106   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:35.061116   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:35.061122   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:35.073095   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:35.073107   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:37.128746   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055582995s)
	I0602 11:06:39.629638   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:39.719744   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:39.751161   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.751172   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:39.751233   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:39.780249   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.780261   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:39.780319   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:39.809191   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.809204   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:39.809259   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:39.837277   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.837288   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:39.837354   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:39.865911   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.865922   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:39.865977   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:39.894428   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.894440   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:39.894508   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:39.923609   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.923621   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:39.923681   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:39.952594   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.952606   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:39.952613   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:39.952631   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:42.012619   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059940213s)
	I0602 11:06:42.012752   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:42.012763   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:42.051824   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:42.051860   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:42.064028   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:42.064044   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:42.116407   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:42.116419   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:42.116429   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:44.630691   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:44.720202   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:44.753527   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.753540   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:44.753594   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:44.783807   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.783820   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:44.783877   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:44.815087   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.815101   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:44.815157   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:44.855143   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.855157   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:44.855211   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:44.884114   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.884126   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:44.884184   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:44.912516   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.912529   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:44.912586   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:44.942078   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.942090   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:44.942144   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:44.973360   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.973371   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:44.973378   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:44.973384   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:45.013557   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:45.013572   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:45.024888   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:45.024900   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:45.077791   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:45.077807   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:45.077815   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:45.089614   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:45.089626   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:47.143631   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053956953s)
	I0602 11:06:49.645446   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:49.720524   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:49.751916   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.751928   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:49.751985   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:49.781581   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.781593   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:49.781650   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:49.811063   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.811076   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:49.811131   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:49.839799   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.839812   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:49.839870   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:49.868670   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.868683   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:49.868741   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:49.897111   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.897125   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:49.897187   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:49.926696   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.926708   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:49.926765   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:49.955084   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.955097   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:49.955103   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:49.955110   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:50.010000   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:50.010012   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:50.010021   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:50.022044   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:50.022057   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:52.079829   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057724742s)
	I0602 11:06:52.079935   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:52.079942   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:52.119564   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:52.119577   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:54.633352   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:54.721975   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:54.753327   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.753339   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:54.753394   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:54.782146   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.782158   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:54.782214   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:54.810970   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.810983   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:54.811029   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:54.842645   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.842665   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:54.842725   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:54.871490   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.871502   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:54.871556   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:54.900472   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.900483   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:54.900541   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:54.929112   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.929124   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:54.929182   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:54.958837   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.958849   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:54.958857   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:54.958866   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:54.998335   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:54.998348   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:55.009734   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:55.009746   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:55.062791   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:55.062801   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:55.062808   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:55.074548   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:55.074559   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:57.132240   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057634309s)
	I0602 11:06:59.633858   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:59.720436   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:59.752920   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.752935   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:59.752993   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:59.784345   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.784360   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:59.784424   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:59.814781   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.814794   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:59.814853   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:59.850880   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.850892   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:59.850948   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:59.880523   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.880539   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:59.880600   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:59.910968   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.910980   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:59.911060   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:59.946727   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.946740   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:59.946803   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:59.981179   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.981189   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:59.981196   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:59.981202   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:59.994847   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:59.994861   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:07:02.050458   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055549548s)
	I0602 11:07:02.050580   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:07:02.050589   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:07:02.101229   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:07:02.101244   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:07:02.112793   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:07:02.112805   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:07:02.163947   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:07:04.666274   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:07:04.721369   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:07:04.751578   13778 logs.go:274] 0 containers: []
	W0602 11:07:04.751590   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:07:04.751645   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:07:04.781089   13778 logs.go:274] 0 containers: []
	W0602 11:07:04.781103   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:07:04.781167   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:07:04.813732   13778 logs.go:274] 0 containers: []
	W0602 11:07:04.813752   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:07:04.813815   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:07:04.843941   13778 logs.go:274] 0 containers: []
	W0602 11:07:04.843953   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:07:04.844009   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:07:04.872954   13778 logs.go:274] 0 containers: []
	W0602 11:07:04.872965   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:07:04.873021   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:07:04.905091   13778 logs.go:274] 0 containers: []
	W0602 11:07:04.905104   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:07:04.905166   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:07:04.936359   13778 logs.go:274] 0 containers: []
	W0602 11:07:04.936370   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:07:04.936428   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:07:04.969406   13778 logs.go:274] 0 containers: []
	W0602 11:07:04.969427   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:07:04.969436   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:07:04.969443   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:07:05.026871   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:07:05.026881   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:07:05.026888   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:07:05.041907   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:07:05.041919   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:07:07.096953   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054985855s)
	I0602 11:07:07.097062   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:07:07.097069   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:07:07.137273   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:07:07.137287   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:07:09.649298   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:07:09.720716   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:07:09.752563   13778 logs.go:274] 0 containers: []
	W0602 11:07:09.752604   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:07:09.752670   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:07:09.786025   13778 logs.go:274] 0 containers: []
	W0602 11:07:09.786038   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:07:09.786105   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:07:09.822144   13778 logs.go:274] 0 containers: []
	W0602 11:07:09.822156   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:07:09.822221   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:07:09.853092   13778 logs.go:274] 0 containers: []
	W0602 11:07:09.853108   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:07:09.853175   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:07:09.884062   13778 logs.go:274] 0 containers: []
	W0602 11:07:09.884076   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:07:09.884142   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:07:09.913785   13778 logs.go:274] 0 containers: []
	W0602 11:07:09.913799   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:07:09.913868   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:07:09.944173   13778 logs.go:274] 0 containers: []
	W0602 11:07:09.944188   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:07:09.944245   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:07:09.975879   13778 logs.go:274] 0 containers: []
	W0602 11:07:09.975893   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:07:09.975900   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:07:09.975906   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:07:10.018732   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:07:10.018751   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:07:10.031899   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:07:10.031919   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:07:10.084458   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:07:10.084469   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:07:10.084477   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:07:10.098261   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:07:10.098277   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:07:12.154673   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056348608s)
	I0602 11:07:14.654931   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:07:14.720895   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:07:14.760517   13778 logs.go:274] 0 containers: []
	W0602 11:07:14.760533   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:07:14.760627   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:07:14.796720   13778 logs.go:274] 0 containers: []
	W0602 11:07:14.796739   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:07:14.796800   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:07:14.837289   13778 logs.go:274] 0 containers: []
	W0602 11:07:14.837301   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:07:14.837363   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:07:14.872521   13778 logs.go:274] 0 containers: []
	W0602 11:07:14.872533   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:07:14.872601   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:07:14.906564   13778 logs.go:274] 0 containers: []
	W0602 11:07:14.906577   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:07:14.906635   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:07:14.939629   13778 logs.go:274] 0 containers: []
	W0602 11:07:14.939644   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:07:14.939719   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:07:14.978179   13778 logs.go:274] 0 containers: []
	W0602 11:07:14.978191   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:07:14.978249   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:07:15.013702   13778 logs.go:274] 0 containers: []
	W0602 11:07:15.013722   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:07:15.013734   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:07:15.013748   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:07:15.107928   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:07:15.107941   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:07:15.107949   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:07:15.122900   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:07:15.122913   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:07:17.180015   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057053144s)
	I0602 11:07:17.180134   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:07:17.180144   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:07:17.220417   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:07:17.220432   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:07:19.732561   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:07:20.220342   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:07:20.266227   13778 logs.go:274] 0 containers: []
	W0602 11:07:20.266241   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:07:20.266298   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:07:20.304773   13778 logs.go:274] 0 containers: []
	W0602 11:07:20.304793   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:07:20.304865   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:07:20.335824   13778 logs.go:274] 0 containers: []
	W0602 11:07:20.335837   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:07:20.335912   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:07:20.368140   13778 logs.go:274] 0 containers: []
	W0602 11:07:20.368153   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:07:20.368212   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:07:20.399005   13778 logs.go:274] 0 containers: []
	W0602 11:07:20.399016   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:07:20.399078   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:07:20.427206   13778 logs.go:274] 0 containers: []
	W0602 11:07:20.427220   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:07:20.427283   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:07:20.460175   13778 logs.go:274] 0 containers: []
	W0602 11:07:20.460187   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:07:20.460241   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:07:20.489620   13778 logs.go:274] 0 containers: []
	W0602 11:07:20.489633   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:07:20.489640   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:07:20.489648   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:07:20.502234   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:07:20.502247   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:07:22.562343   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060046447s)
	I0602 11:07:22.562453   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:07:22.562459   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:07:22.604191   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:07:22.604205   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:07:22.615918   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:07:22.615930   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:07:22.672055   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:07:25.172922   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:07:25.220764   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:07:25.266612   13778 logs.go:274] 0 containers: []
	W0602 11:07:25.266624   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:07:25.266679   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:07:25.296835   13778 logs.go:274] 0 containers: []
	W0602 11:07:25.296847   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:07:25.296911   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:07:25.326952   13778 logs.go:274] 0 containers: []
	W0602 11:07:25.326964   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:07:25.327024   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:07:25.357104   13778 logs.go:274] 0 containers: []
	W0602 11:07:25.357117   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:07:25.357177   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:07:25.388810   13778 logs.go:274] 0 containers: []
	W0602 11:07:25.388822   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:07:25.388880   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:07:25.431220   13778 logs.go:274] 0 containers: []
	W0602 11:07:25.431233   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:07:25.431292   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:07:25.462653   13778 logs.go:274] 0 containers: []
	W0602 11:07:25.462666   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:07:25.462725   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:07:25.495129   13778 logs.go:274] 0 containers: []
	W0602 11:07:25.495143   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:07:25.495150   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:07:25.495157   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:07:25.508526   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:07:25.508539   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:07:27.564955   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056367534s)
	I0602 11:07:27.565062   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:07:27.565070   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:07:27.604342   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:07:27.604355   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:07:27.616408   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:07:27.616420   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:07:27.683734   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:07:30.184404   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:07:30.221342   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:07:30.269903   13778 logs.go:274] 0 containers: []
	W0602 11:07:30.269916   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:07:30.269974   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:07:30.300599   13778 logs.go:274] 0 containers: []
	W0602 11:07:30.300611   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:07:30.300670   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:07:30.331549   13778 logs.go:274] 0 containers: []
	W0602 11:07:30.331564   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:07:30.331624   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:07:30.362291   13778 logs.go:274] 0 containers: []
	W0602 11:07:30.362304   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:07:30.362363   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:07:30.392491   13778 logs.go:274] 0 containers: []
	W0602 11:07:30.392504   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:07:30.392565   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:07:30.421223   13778 logs.go:274] 0 containers: []
	W0602 11:07:30.421236   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:07:30.421301   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:07:30.450047   13778 logs.go:274] 0 containers: []
	W0602 11:07:30.450059   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:07:30.450116   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:07:30.478350   13778 logs.go:274] 0 containers: []
	W0602 11:07:30.478362   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:07:30.478369   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:07:30.478382   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:07:32.534254   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055823963s)
	I0602 11:07:32.534378   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:07:32.534387   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:07:32.574684   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:07:32.574697   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:07:32.585983   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:07:32.585995   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:07:32.638219   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:07:32.638229   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:07:32.638235   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:07:35.152608   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:07:35.220509   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:07:35.266643   13778 logs.go:274] 0 containers: []
	W0602 11:07:35.266655   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:07:35.266711   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:07:35.298042   13778 logs.go:274] 0 containers: []
	W0602 11:07:35.298054   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:07:35.298112   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:07:35.328684   13778 logs.go:274] 0 containers: []
	W0602 11:07:35.328696   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:07:35.328761   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:07:35.358186   13778 logs.go:274] 0 containers: []
	W0602 11:07:35.358198   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:07:35.358257   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:07:35.387404   13778 logs.go:274] 0 containers: []
	W0602 11:07:35.387424   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:07:35.387485   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:07:35.415804   13778 logs.go:274] 0 containers: []
	W0602 11:07:35.415816   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:07:35.415870   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:07:35.444719   13778 logs.go:274] 0 containers: []
	W0602 11:07:35.444733   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:07:35.444788   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:07:35.473270   13778 logs.go:274] 0 containers: []
	W0602 11:07:35.473282   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:07:35.473288   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:07:35.473295   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:07:35.485309   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:07:35.485322   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:07:37.541841   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056471215s)
	I0602 11:07:37.541967   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:07:37.541975   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:07:37.585247   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:07:37.585266   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:07:37.596727   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:07:37.596739   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:07:37.648688   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:07:40.150980   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:07:40.221775   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:07:40.271037   13778 logs.go:274] 0 containers: []
	W0602 11:07:40.271048   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:07:40.271107   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:07:40.300307   13778 logs.go:274] 0 containers: []
	W0602 11:07:40.300320   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:07:40.300382   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:07:40.329741   13778 logs.go:274] 0 containers: []
	W0602 11:07:40.329752   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:07:40.329809   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:07:40.359334   13778 logs.go:274] 0 containers: []
	W0602 11:07:40.359347   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:07:40.359404   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:07:40.387328   13778 logs.go:274] 0 containers: []
	W0602 11:07:40.387340   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:07:40.387398   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:07:40.415829   13778 logs.go:274] 0 containers: []
	W0602 11:07:40.415841   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:07:40.415896   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:07:40.445356   13778 logs.go:274] 0 containers: []
	W0602 11:07:40.445368   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:07:40.445426   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:07:40.475264   13778 logs.go:274] 0 containers: []
	W0602 11:07:40.475276   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:07:40.475283   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:07:40.475291   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:07:40.515152   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:07:40.515169   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:07:40.527850   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:07:40.527863   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:07:40.588998   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:07:40.589012   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:07:40.589019   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:07:40.601087   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:07:40.601098   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:07:42.651120   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04997538s)
	I0602 11:07:45.151676   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:07:45.220749   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:07:45.275993   13778 logs.go:274] 0 containers: []
	W0602 11:07:45.276005   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:07:45.276062   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:07:45.304196   13778 logs.go:274] 0 containers: []
	W0602 11:07:45.304209   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:07:45.304265   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:07:45.333780   13778 logs.go:274] 0 containers: []
	W0602 11:07:45.333792   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:07:45.333849   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:07:45.362357   13778 logs.go:274] 0 containers: []
	W0602 11:07:45.362369   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:07:45.362434   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:07:45.392964   13778 logs.go:274] 0 containers: []
	W0602 11:07:45.392976   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:07:45.393034   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:07:45.422778   13778 logs.go:274] 0 containers: []
	W0602 11:07:45.422790   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:07:45.422850   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:07:45.451449   13778 logs.go:274] 0 containers: []
	W0602 11:07:45.451462   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:07:45.451515   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:07:45.479994   13778 logs.go:274] 0 containers: []
	W0602 11:07:45.480006   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:07:45.480013   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:07:45.480021   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:07:45.538467   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:07:45.538479   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:07:45.538488   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:07:45.552004   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:07:45.552016   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:07:47.623351   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.071283264s)
	I0602 11:07:47.623480   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:07:47.623489   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:07:47.669721   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:07:47.669735   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:07:50.182746   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:07:50.221068   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:07:50.274945   13778 logs.go:274] 0 containers: []
	W0602 11:07:50.274957   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:07:50.275025   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:07:50.305130   13778 logs.go:274] 0 containers: []
	W0602 11:07:50.305143   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:07:50.305201   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:07:50.333529   13778 logs.go:274] 0 containers: []
	W0602 11:07:50.333540   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:07:50.333597   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:07:50.361412   13778 logs.go:274] 0 containers: []
	W0602 11:07:50.361425   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:07:50.361481   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:07:50.389624   13778 logs.go:274] 0 containers: []
	W0602 11:07:50.389637   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:07:50.389692   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:07:50.419267   13778 logs.go:274] 0 containers: []
	W0602 11:07:50.419279   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:07:50.419332   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:07:50.449700   13778 logs.go:274] 0 containers: []
	W0602 11:07:50.449712   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:07:50.449771   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:07:50.477050   13778 logs.go:274] 0 containers: []
	W0602 11:07:50.477064   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:07:50.477073   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:07:50.477079   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:07:50.517484   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:07:50.517497   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:07:50.529154   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:07:50.529168   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:07:50.582080   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:07:50.582091   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:07:50.582098   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:07:50.594502   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:07:50.594514   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:07:52.647620   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05305884s)
	I0602 11:07:55.148632   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:07:55.221680   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:07:55.271306   13778 logs.go:274] 0 containers: []
	W0602 11:07:55.271319   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:07:55.271373   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:07:55.299674   13778 logs.go:274] 0 containers: []
	W0602 11:07:55.299685   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:07:55.299742   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:07:55.328239   13778 logs.go:274] 0 containers: []
	W0602 11:07:55.328252   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:07:55.328308   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:07:55.357076   13778 logs.go:274] 0 containers: []
	W0602 11:07:55.357087   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:07:55.357168   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:07:55.385344   13778 logs.go:274] 0 containers: []
	W0602 11:07:55.385356   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:07:55.385423   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:07:55.416309   13778 logs.go:274] 0 containers: []
	W0602 11:07:55.416321   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:07:55.416378   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:07:55.445731   13778 logs.go:274] 0 containers: []
	W0602 11:07:55.445746   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:07:55.445805   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:07:55.474741   13778 logs.go:274] 0 containers: []
	W0602 11:07:55.474756   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:07:55.474768   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:07:55.474777   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:07:55.487710   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:07:55.487725   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:07:57.540977   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053202296s)
	I0602 11:07:57.541081   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:07:57.541088   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:07:57.591097   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:07:57.591114   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:07:57.604019   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:07:57.604033   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:07:57.657177   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:00.157958   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:00.221171   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:00.275191   13778 logs.go:274] 0 containers: []
	W0602 11:08:00.275202   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:00.275257   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:00.304341   13778 logs.go:274] 0 containers: []
	W0602 11:08:00.304352   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:00.304408   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:00.333833   13778 logs.go:274] 0 containers: []
	W0602 11:08:00.333844   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:00.333902   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:00.362537   13778 logs.go:274] 0 containers: []
	W0602 11:08:00.362549   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:00.362607   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:00.394102   13778 logs.go:274] 0 containers: []
	W0602 11:08:00.394117   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:00.394175   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:00.422151   13778 logs.go:274] 0 containers: []
	W0602 11:08:00.422162   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:00.422216   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:00.451112   13778 logs.go:274] 0 containers: []
	W0602 11:08:00.451123   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:00.451176   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:00.479422   13778 logs.go:274] 0 containers: []
	W0602 11:08:00.479433   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:00.479440   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:00.479448   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:02.531273   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051777446s)
	I0602 11:08:02.531378   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:02.531385   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:02.574113   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:02.574128   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:02.587235   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:02.587250   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:02.642786   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:02.642796   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:02.642803   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:05.156996   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:05.221133   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:05.269685   13778 logs.go:274] 0 containers: []
	W0602 11:08:05.269698   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:05.269754   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:05.298304   13778 logs.go:274] 0 containers: []
	W0602 11:08:05.298318   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:05.298376   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:05.327210   13778 logs.go:274] 0 containers: []
	W0602 11:08:05.327222   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:05.327277   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:05.356433   13778 logs.go:274] 0 containers: []
	W0602 11:08:05.356446   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:05.356520   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:05.385548   13778 logs.go:274] 0 containers: []
	W0602 11:08:05.385559   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:05.385615   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:05.414226   13778 logs.go:274] 0 containers: []
	W0602 11:08:05.414237   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:05.414293   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:05.443568   13778 logs.go:274] 0 containers: []
	W0602 11:08:05.443580   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:05.443636   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:05.471485   13778 logs.go:274] 0 containers: []
	W0602 11:08:05.471497   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:05.471504   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:05.471511   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:05.483235   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:05.483250   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:05.535710   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:05.535721   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:05.535728   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:05.547539   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:05.547551   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:07.603346   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05574736s)
	I0602 11:08:07.603455   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:07.603461   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:10.145868   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:10.222163   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:10.270636   13778 logs.go:274] 0 containers: []
	W0602 11:08:10.270647   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:10.270703   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:10.300726   13778 logs.go:274] 0 containers: []
	W0602 11:08:10.300739   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:10.300796   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:10.330008   13778 logs.go:274] 0 containers: []
	W0602 11:08:10.330021   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:10.330076   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:10.358303   13778 logs.go:274] 0 containers: []
	W0602 11:08:10.358316   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:10.358370   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:10.386777   13778 logs.go:274] 0 containers: []
	W0602 11:08:10.386789   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:10.386846   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:10.415984   13778 logs.go:274] 0 containers: []
	W0602 11:08:10.415995   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:10.416052   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:10.446955   13778 logs.go:274] 0 containers: []
	W0602 11:08:10.446967   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:10.447026   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:10.475654   13778 logs.go:274] 0 containers: []
	W0602 11:08:10.475667   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:10.475675   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:10.475683   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:10.516969   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:10.516989   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:10.529248   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:10.529261   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:10.581386   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:10.581396   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:10.581404   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:10.593673   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:10.593683   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:12.647870   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054140336s)
	I0602 11:08:15.148971   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:15.221956   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:15.271807   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.271819   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:15.271873   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:15.303439   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.303452   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:15.303518   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:15.333961   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.333988   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:15.334084   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:15.364875   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.364888   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:15.364950   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:15.395700   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.395712   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:15.395765   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:15.424510   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.424520   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:15.424572   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:15.453415   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.453428   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:15.453493   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:15.483708   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.483719   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:15.483724   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:15.483730   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:15.538743   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:15.538752   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:15.538758   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:15.550783   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:15.550794   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:17.605845   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055003078s)
	I0602 11:08:17.605979   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:17.605988   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:17.649331   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:17.649353   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:20.164014   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:20.221322   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:20.272710   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.272723   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:20.272780   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:20.303113   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.303125   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:20.303179   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:20.332713   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.332726   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:20.332786   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:20.363526   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.363541   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:20.363604   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:20.393277   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.393290   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:20.393345   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:20.423123   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.423136   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:20.423189   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:20.452818   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.452831   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:20.452894   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:20.482672   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.482685   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:20.482691   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:20.482699   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:20.537779   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:20.537790   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:20.537797   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:20.551744   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:20.551756   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:22.603781   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051975725s)
	I0602 11:08:22.603889   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:22.603895   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:22.641201   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:22.641214   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:25.154798   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:25.221414   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:25.296178   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.296191   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:25.296260   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:25.329053   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.329071   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:25.329164   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:25.357741   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.357752   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:25.357810   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:25.390667   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.390682   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:25.390741   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:25.437576   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.437588   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:25.437644   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:25.466359   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.466375   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:25.466456   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:25.502948   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.502962   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:25.503019   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:25.538129   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.538146   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:25.538154   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:25.538162   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:25.582011   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:25.582029   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:25.595600   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:25.595615   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:25.652328   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:25.652345   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:25.652351   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:25.665370   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:25.665381   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:27.726129   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060700298s)
	I0602 11:08:30.226574   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:30.721539   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:30.759508   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.759521   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:30.759579   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:30.792623   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.792637   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:30.792712   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:30.822014   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.822028   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:30.822086   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:30.851154   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.851168   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:30.851240   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:30.880918   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.880931   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:30.880986   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:30.910502   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.910515   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:30.910577   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:30.941645   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.941657   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:30.941714   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:30.972909   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.972921   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:30.972928   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:30.972934   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:30.984875   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:30.984888   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:31.040921   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:31.040935   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:31.040942   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:31.053333   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:31.053346   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:33.107850   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05445655s)
	I0602 11:08:33.107952   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:33.107959   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:35.650135   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:35.721787   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:35.751661   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.751673   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:35.751730   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:35.780322   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.780334   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:35.780393   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:35.809983   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.809996   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:35.810052   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:35.838069   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.838081   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:35.838140   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:35.866612   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.866629   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:35.866713   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:35.897341   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.897354   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:35.897409   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:35.928444   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.928456   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:35.928513   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:35.956497   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.956510   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:35.956517   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:35.956524   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:35.969093   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:35.969108   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:38.024274   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055118179s)
	I0602 11:08:38.024385   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:38.024393   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:38.064021   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:38.064037   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:38.075931   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:38.075944   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:38.130990   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:40.632494   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:40.722073   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:40.750220   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.750232   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:40.750297   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:40.778245   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.778256   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:40.778304   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:40.807262   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.807273   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:40.807333   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:40.836172   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.836183   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:40.836239   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:40.864838   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.864850   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:40.864906   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:40.893840   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.893852   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:40.893910   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:40.923704   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.923715   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:40.923773   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:40.951957   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.951970   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:40.951978   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:40.951986   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:41.004848   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:41.004859   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:41.004865   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:41.017334   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:41.017346   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:43.066770   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0493766s)
	I0602 11:08:43.066886   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:43.066894   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:43.107798   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:43.107814   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:45.621045   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:45.722513   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:45.753852   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.753863   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:45.753920   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:45.782032   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.782044   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:45.782103   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:45.811660   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.811672   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:45.811730   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:45.841102   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.841115   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:45.841176   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:45.869555   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.869568   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:45.869625   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:45.896999   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.897011   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:45.897079   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:45.925033   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.925045   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:45.925100   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:45.955532   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.955543   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:45.955550   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:45.955556   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:45.994815   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:45.994828   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:46.006706   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:46.006718   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:46.059309   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:46.059318   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:46.059325   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:46.071706   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:46.071719   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:48.125554   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053788045s)
	I0602 11:08:50.627972   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:50.722301   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:50.752680   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.752693   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:50.752749   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:50.781019   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.781032   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:50.781090   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:50.810077   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.810088   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:50.810152   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:50.839097   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.839108   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:50.839164   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:50.870493   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.870504   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:50.870560   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:50.899156   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.899168   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:50.899224   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:50.927401   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.927413   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:50.927469   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:50.970889   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.970901   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:50.970908   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:50.970915   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:51.026070   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:51.026080   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:51.026086   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:51.037940   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:51.037952   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:53.091015   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053015843s)
	I0602 11:08:53.091123   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:53.091130   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:53.130767   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:53.130781   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:55.642775   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:55.722143   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:55.752596   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.752608   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:55.752663   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:55.781383   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.781395   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:55.781453   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:55.810740   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.810751   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:55.810806   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:55.839025   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.839037   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:55.839092   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:55.868111   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.868123   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:55.868185   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:55.896365   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.896376   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:55.896436   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:55.925240   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.925252   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:55.925308   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:55.954351   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.954362   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:55.954370   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:55.954377   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:55.994349   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:55.994360   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:56.006541   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:56.006553   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:56.060230   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:56.060240   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:56.060246   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:56.072372   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:56.072385   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:58.126471   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054039162s)
	I0602 11:09:00.626897   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:09:00.636995   13778 kubeadm.go:630] restartCluster took 4m5.698955011s
	W0602 11:09:00.637074   13778 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0602 11:09:00.637089   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:09:01.056935   13778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:09:01.066336   13778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:09:01.073784   13778 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:09:01.073830   13778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:09:01.081072   13778 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:09:01.081099   13778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:09:01.817978   13778 out.go:204]   - Generating certificates and keys ...
	I0602 11:09:02.504280   13778 out.go:204]   - Booting up control plane ...
	W0602 11:10:57.423207   13778 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0602 11:10:57.423236   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:10:57.840204   13778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:10:57.849925   13778 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:10:57.849972   13778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:10:57.857794   13778 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:10:57.857811   13778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:10:58.606461   13778 out.go:204]   - Generating certificates and keys ...
	I0602 11:10:59.124567   13778 out.go:204]   - Booting up control plane ...
	I0602 11:12:54.041678   13778 kubeadm.go:397] StartCluster complete in 7m59.136004493s
	I0602 11:12:54.041759   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:12:54.071372   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.071384   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:12:54.071441   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:12:54.100053   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.100066   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:12:54.100125   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:12:54.128275   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.128286   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:12:54.128343   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:12:54.157653   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.157665   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:12:54.157722   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:12:54.187430   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.187443   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:12:54.187496   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:12:54.215461   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.215472   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:12:54.215526   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:12:54.244945   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.244956   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:12:54.245011   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:12:54.274697   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.274709   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:12:54.274716   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:12:54.274725   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:12:54.287581   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:12:54.287595   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:12:56.340056   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052413965s)
	I0602 11:12:56.340164   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:12:56.340171   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:12:56.380800   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:12:56.380813   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:12:56.392375   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:12:56.392386   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:12:56.445060   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0602 11:12:56.445088   13778 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0602 11:12:56.445103   13778 out.go:239] * 
	* 
	W0602 11:12:56.445207   13778 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 11:12:56.445222   13778 out.go:239] * 
	* 
	W0602 11:12:56.445819   13778 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 11:12:56.530257   13778 out.go:177] 
	W0602 11:12:56.572600   13778 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 11:12:56.572701   13778 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0602 11:12:56.572743   13778 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0602 11:12:56.593452   13778 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220602105906-2113 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
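The failure above is minikube's K8S_KUBELET_NOT_RUNNING exit (status 109): kubeadm's wait-control-plane phase gave up because the kubelet never answered on localhost:10248. A minimal manual follow-up sketch, built only from commands the log itself suggests (profile name and binary path are copied from the failing invocation; the retry flag is the log's own Suggestion line, not a verified fix):
	out/minikube-darwin-amd64 -p old-k8s-version-20220602105906-2113 ssh "sudo systemctl status kubelet"
	out/minikube-darwin-amd64 -p old-k8s-version-20220602105906-2113 ssh "sudo journalctl -xeu kubelet"
	# Look for a control-plane container that crashed after the static Pod manifests were written:
	out/minikube-darwin-amd64 -p old-k8s-version-20220602105906-2113 ssh "docker ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver hint from the Suggestion line above:
	out/minikube-darwin-amd64 start -p old-k8s-version-20220602105906-2113 --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd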
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220602105906-2113
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220602105906-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07",
	        "Created": "2022-06-02T17:59:12.760386506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204740,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T18:04:51.572935922Z",
	            "FinishedAt": "2022-06-02T18:04:48.684748032Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/hostname",
	        "HostsPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/hosts",
	        "LogPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07-json.log",
	        "Name": "/old-k8s-version-20220602105906-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220602105906-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220602105906-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/docker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb50097d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220602105906-2113",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220602105906-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220602105906-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220602105906-2113",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220602105906-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77d71d4d8d15408927c38bc69753733fb245f90b6786c7b56828647b3b4389d6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52182"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52183"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52179"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52180"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52181"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/77d71d4d8d15",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220602105906-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "61b85e98188b",
	                        "old-k8s-version-20220602105906-2113"
	                    ],
	                    "NetworkID": "fefb74a76593392c8406a972f20a5745c2403bb46ee6809bd1a18584d4cbeee4",
	                    "EndpointID": "3cd2312efe3d60be38aeb6608533eff057e701e91a3e65f1ab1e73ec94a72df1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
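The inspect dump above records the container as Running, with its forwarded ports under NetworkSettings.Ports and its address on the profile network. A narrower query over the same container, if only those fields are of interest, could use docker inspect's Go-template flag (the templates below are an illustrative sketch, not part of the test harness):
	docker inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-20220602105906-2113
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-20220602105906-2113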
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113: exit status 2 (449.392797ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220602105906-2113 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220602105906-2113 logs -n 25: (3.495239219s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p                                                | enable-default-cni-20220602104455-2113         | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:58 PDT | 02 Jun 22 10:58 PDT |
	|         | enable-default-cni-20220602104455-2113            |                                                |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                                |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220602104455-2113         | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:58 PDT | 02 Jun 22 10:58 PDT |
	|         | enable-default-cni-20220602104455-2113            |                                                |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                                |         |                |                     |                     |
	| start   | -p kubenet-20220602104455-2113                    | kubenet-20220602104455-2113                    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:58 PDT | 02 Jun 22 10:59 PDT |
	|         | --memory=2048                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                |         |                |                     |                     |
	|         | --network-plugin=kubenet                          |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	| ssh     | -p kubenet-20220602104455-2113                    | kubenet-20220602104455-2113                    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	|         | pgrep -a kubelet                                  |                                                |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220602104455-2113         | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	|         | enable-default-cni-20220602104455-2113            |                                                |         |                |                     |                     |
	| delete  | -p kubenet-20220602104455-2113                    | kubenet-20220602104455-2113                    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	| delete  | -p                                                | disable-driver-mounts-20220602105918-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	|         | disable-driver-mounts-20220602105918-2113         |                                                |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220602105906-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:04 PDT | 02 Jun 22 11:04 PDT |
	|         | old-k8s-version-20220602105906-2113               |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220602105906-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:04 PDT | 02 Jun 22 11:04 PDT |
	|         | old-k8s-version-20220602105906-2113               |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | no-preload-20220602105919-2113                    | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:07 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | no-preload-20220602105919-2113                    | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
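	The start invocation recorded in the audit table above can be replayed for a local reproduction using the same flags (a sketch, assuming the darwin-amd64 binary has been built into out/ and reusing the profile name from this run):
	
		out/minikube-darwin-amd64 start -p default-k8s-different-port-20220602110711-2113 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6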
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 11:08:15
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 11:08:15.517716   14271 out.go:296] Setting OutFile to fd 1 ...
	I0602 11:08:15.517914   14271 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:08:15.517920   14271 out.go:309] Setting ErrFile to fd 2...
	I0602 11:08:15.517924   14271 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:08:15.518039   14271 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 11:08:15.518296   14271 out.go:303] Setting JSON to false
	I0602 11:08:15.533877   14271 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4064,"bootTime":1654189231,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 11:08:15.534006   14271 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 11:08:15.555791   14271 out.go:177] * [default-k8s-different-port-20220602110711-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 11:08:15.597880   14271 notify.go:193] Checking for updates...
	I0602 11:08:15.619617   14271 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 11:08:15.640808   14271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:08:15.661783   14271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 11:08:15.682595   14271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 11:08:15.703785   14271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 11:08:15.725094   14271 config.go:178] Loaded profile config "default-k8s-different-port-20220602110711-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:08:15.725430   14271 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 11:08:15.796906   14271 docker.go:137] docker version: linux-20.10.14
	I0602 11:08:15.797053   14271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:08:15.922561   14271 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:08:15.86746037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:08:15.966236   14271 out.go:177] * Using the docker driver based on existing profile
	I0602 11:08:15.988390   14271 start.go:284] selected driver: docker
	I0602 11:08:15.988424   14271 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220602110711-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220602110711-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:08:15.988564   14271 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 11:08:15.991998   14271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:08:16.114994   14271 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:08:16.062502247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:08:16.115182   14271 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 11:08:16.115205   14271 cni.go:95] Creating CNI manager for ""
	I0602 11:08:16.115214   14271 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:08:16.115223   14271 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220602110711-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220602110711-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:08:16.158958   14271 out.go:177] * Starting control plane node default-k8s-different-port-20220602110711-2113 in cluster default-k8s-different-port-20220602110711-2113
	I0602 11:08:16.181099   14271 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 11:08:16.202847   14271 out.go:177] * Pulling base image ...
	I0602 11:08:16.244857   14271 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:08:16.244892   14271 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 11:08:16.244926   14271 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 11:08:16.244951   14271 cache.go:57] Caching tarball of preloaded images
	I0602 11:08:16.245139   14271 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 11:08:16.245160   14271 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 11:08:16.246083   14271 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/config.json ...
	I0602 11:08:16.310676   14271 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 11:08:16.310691   14271 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 11:08:16.310699   14271 cache.go:206] Successfully downloaded all kic artifacts
	I0602 11:08:16.310742   14271 start.go:352] acquiring machines lock for default-k8s-different-port-20220602110711-2113: {Name:mk5c32f64296c6672223bdc5496081160863f257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 11:08:16.310822   14271 start.go:356] acquired machines lock for "default-k8s-different-port-20220602110711-2113" in 60.649µs
	I0602 11:08:16.310842   14271 start.go:94] Skipping create...Using existing machine configuration
	I0602 11:08:16.310853   14271 fix.go:55] fixHost starting: 
	I0602 11:08:16.311066   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:08:16.377507   14271 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220602110711-2113: state=Stopped err=<nil>
	W0602 11:08:16.377551   14271 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 11:08:16.399302   14271 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220602110711-2113" ...
	I0602 11:08:16.420479   14271 cli_runner.go:164] Run: docker start default-k8s-different-port-20220602110711-2113
	I0602 11:08:16.774466   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:08:16.847223   14271 kic.go:416] container "default-k8s-different-port-20220602110711-2113" state is running.
	I0602 11:08:16.847828   14271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220602110711-2113
	I0602 11:08:16.920874   14271 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/config.json ...
	I0602 11:08:16.921257   14271 machine.go:88] provisioning docker machine ...
	I0602 11:08:16.921280   14271 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220602110711-2113"
	I0602 11:08:16.921351   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:16.993938   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:16.994122   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:16.994150   14271 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220602110711-2113 && echo "default-k8s-different-port-20220602110711-2113" | sudo tee /etc/hostname
	I0602 11:08:17.119677   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220602110711-2113
	
	I0602 11:08:17.119769   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.193462   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:17.193625   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:17.193641   14271 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220602110711-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220602110711-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220602110711-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 11:08:17.313470   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:08:17.313494   14271 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 11:08:17.313514   14271 ubuntu.go:177] setting up certificates
	I0602 11:08:17.313526   14271 provision.go:83] configureAuth start
	I0602 11:08:17.313600   14271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.386535   14271 provision.go:138] copyHostCerts
	I0602 11:08:17.386632   14271 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 11:08:17.386642   14271 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 11:08:17.386747   14271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 11:08:17.386997   14271 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 11:08:17.387004   14271 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 11:08:17.387064   14271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 11:08:17.387225   14271 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 11:08:17.387231   14271 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 11:08:17.387292   14271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 11:08:17.387411   14271 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220602110711-2113 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220602110711-2113]
	I0602 11:08:17.434515   14271 provision.go:172] copyRemoteCerts
	I0602 11:08:17.434580   14271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 11:08:17.434625   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.506502   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:17.593925   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 11:08:17.614967   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0602 11:08:17.637005   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0602 11:08:17.658235   14271 provision.go:86] duration metric: configureAuth took 344.691133ms
	I0602 11:08:17.658249   14271 ubuntu.go:193] setting minikube options for container-runtime
	I0602 11:08:17.658395   14271 config.go:178] Loaded profile config "default-k8s-different-port-20220602110711-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:08:17.658448   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.730610   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:17.730757   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:17.730766   14271 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 11:08:17.850560   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 11:08:17.850583   14271 ubuntu.go:71] root file system type: overlay
	I0602 11:08:17.850750   14271 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 11:08:17.850832   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.922108   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:17.922253   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:17.922301   14271 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 11:08:18.046181   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 11:08:18.046271   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.117615   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:18.117752   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:18.117764   14271 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 11:08:18.238940   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: 
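	The diff/mv sequence above swaps in the regenerated unit only when it differs from the one already installed on the node, then reloads systemd and restarts Docker. The unit actually in effect can be inspected afterwards with an invocation along these lines (a hypothetical manual check, reusing the profile name from this run; systemctl cat is the same inspection the provisioner runs later in this log):
	
		out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220602110711-2113 sudo systemctl cat docker.service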
	I0602 11:08:18.238960   14271 machine.go:91] provisioned docker machine in 1.317671465s
	I0602 11:08:18.238969   14271 start.go:306] post-start starting for "default-k8s-different-port-20220602110711-2113" (driver="docker")
	I0602 11:08:18.238974   14271 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 11:08:18.239040   14271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 11:08:18.239086   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.309021   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:18.395195   14271 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 11:08:18.398736   14271 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 11:08:18.398753   14271 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 11:08:18.398761   14271 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 11:08:18.398769   14271 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 11:08:18.398779   14271 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 11:08:18.398885   14271 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 11:08:18.399033   14271 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 11:08:18.399193   14271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 11:08:18.406089   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:08:18.423802   14271 start.go:309] post-start completed in 184.82013ms
	I0602 11:08:18.423883   14271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 11:08:18.423931   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.493419   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:18.577352   14271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 11:08:18.582028   14271 fix.go:57] fixHost completed within 2.271136565s
	I0602 11:08:18.582039   14271 start.go:81] releasing machines lock for "default-k8s-different-port-20220602110711-2113", held for 2.271170149s
	I0602 11:08:18.582108   14271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.652251   14271 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 11:08:18.652251   14271 ssh_runner.go:195] Run: systemctl --version
	I0602 11:08:18.652335   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.652339   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.729373   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:18.731038   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:18.813622   14271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 11:08:18.943560   14271 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:08:18.954030   14271 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 11:08:18.954084   14271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 11:08:18.963406   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 11:08:18.976091   14271 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 11:08:19.040894   14271 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 11:08:19.108714   14271 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:08:19.118700   14271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 11:08:19.185811   14271 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 11:08:19.195192   14271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:08:19.228635   14271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:08:15.221956   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:15.271807   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.271819   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:15.271873   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:15.303439   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.303452   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:15.303518   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:15.333961   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.333988   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:15.334084   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:15.364875   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.364888   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:15.364950   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:15.395700   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.395712   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:15.395765   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:15.424510   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.424520   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:15.424572   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:15.453415   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.453428   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:15.453493   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:15.483708   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.483719   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:15.483724   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:15.483730   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:15.538743   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:15.538752   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:15.538758   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:15.550783   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:15.550794   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:17.605845   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055003078s)
	I0602 11:08:17.605979   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:17.605988   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:17.649331   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:17.649353   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:20.164014   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:19.305934   14271 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 11:08:19.306113   14271 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220602110711-2113 dig +short host.docker.internal
	I0602 11:08:19.446242   14271 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 11:08:19.446326   14271 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 11:08:19.450862   14271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
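	The two commands above implement host pinning: the host address learned by digging host.docker.internal from inside the node (192.168.65.2 here) is written into the node's /etc/hosts as host.minikube.internal. The same lookup can be reproduced by hand (a sketch, reusing the container name from this run):
	
		docker exec -t default-k8s-different-port-20220602110711-2113 dig +short host.docker.internal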
	I0602 11:08:19.460634   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:19.531276   14271 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:08:19.531337   14271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:08:19.561235   14271 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0602 11:08:19.561251   14271 docker.go:541] Images already preloaded, skipping extraction
	I0602 11:08:19.561312   14271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:08:19.591189   14271 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0602 11:08:19.591211   14271 cache_images.go:84] Images are preloaded, skipping loading
	I0602 11:08:19.591282   14271 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 11:08:19.665013   14271 cni.go:95] Creating CNI manager for ""
	I0602 11:08:19.665024   14271 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:08:19.665044   14271 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 11:08:19.665056   14271 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220602110711-2113 NodeName:default-k8s-different-port-20220602110711-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 11:08:19.665176   14271 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "default-k8s-different-port-20220602110711-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
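	The generated kubeadm configuration above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. minikube drives kubeadm itself during this start, but a file in this form would be consumed by a manual invocation roughly like the following (hypothetical; shown only to illustrate how such a config is applied):
	
		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new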
	
	I0602 11:08:19.665248   14271 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=default-k8s-different-port-20220602110711-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220602110711-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0602 11:08:19.665304   14271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 11:08:19.673262   14271 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 11:08:19.673322   14271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 11:08:19.680190   14271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0602 11:08:19.692477   14271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 11:08:19.704606   14271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I0602 11:08:19.717011   14271 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 11:08:19.720737   14271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:08:19.730066   14271 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113 for IP: 192.168.58.2
	I0602 11:08:19.730171   14271 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 11:08:19.730221   14271 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 11:08:19.730312   14271 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.key
	I0602 11:08:19.730378   14271 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/apiserver.key.cee25041
	I0602 11:08:19.730457   14271 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/proxy-client.key
	I0602 11:08:19.730674   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 11:08:19.730711   14271 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 11:08:19.730724   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 11:08:19.730754   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 11:08:19.730789   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 11:08:19.730822   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 11:08:19.730884   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:08:19.731420   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 11:08:19.748043   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0602 11:08:19.764498   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 11:08:19.781157   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 11:08:19.797871   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 11:08:19.814159   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 11:08:19.830887   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 11:08:19.848080   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 11:08:19.865456   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 11:08:19.881698   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 11:08:19.898483   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 11:08:19.914958   14271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 11:08:19.927686   14271 ssh_runner.go:195] Run: openssl version
	I0602 11:08:19.932835   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 11:08:19.940543   14271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:08:19.944572   14271 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:08:19.944611   14271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:08:19.949643   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 11:08:19.956574   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 11:08:19.964137   14271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 11:08:19.967898   14271 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 11:08:19.967937   14271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 11:08:19.973115   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
	I0602 11:08:19.980514   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 11:08:19.988285   14271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 11:08:19.991947   14271 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 11:08:19.991984   14271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 11:08:19.997046   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
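The openssl/ln sequence above installs each CA under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name, which is how OpenSSL clients locate trusted CAs. A hedged sketch of that pattern for the minikube CA (the hash shown in the log for it is b5213941):

    # compute the subject hash OpenSSL uses to look up a trusted CA
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # create the hash-named symlink, as the runner does above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"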
	I0602 11:08:20.004017   14271 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220602110711-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220602110711-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:08:20.004132   14271 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:08:20.033806   14271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 11:08:20.041165   14271 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 11:08:20.041189   14271 kubeadm.go:626] restartCluster start
	I0602 11:08:20.041238   14271 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 11:08:20.047947   14271 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.047999   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:20.119320   14271 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220602110711-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:08:20.119501   14271 kubeconfig.go:127] "default-k8s-different-port-20220602110711-2113" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig - will repair!
	I0602 11:08:20.119891   14271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:08:20.121169   14271 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 11:08:20.128818   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.128866   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.140758   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.341344   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.341425   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.350851   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.221322   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:20.272710   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.272723   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:20.272780   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:20.303113   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.303125   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:20.303179   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:20.332713   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.332726   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:20.332786   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:20.363526   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.363541   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:20.363604   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:20.393277   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.393290   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:20.393345   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:20.423123   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.423136   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:20.423189   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:20.452818   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.452831   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:20.452894   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:20.482672   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.482685   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:20.482691   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:20.482699   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:20.537779   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:20.537790   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:20.537797   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:20.551744   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:20.551756   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:22.603781   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051975725s)
	I0602 11:08:22.603889   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:22.603895   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:22.641201   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:22.641214   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:25.154798   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:20.540903   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.541022   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.549461   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.740967   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.741117   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.752173   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.940840   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.940902   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.949819   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.142949   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.143091   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.153503   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.341193   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.341297   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.352208   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.542948   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.543068   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.553688   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.742445   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.742610   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.752897   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.941532   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.941622   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.952125   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.143019   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.143112   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.154053   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.342959   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.343122   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.354067   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.541852   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.541959   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.552227   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.743005   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.743174   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.753673   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.941169   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.941282   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.951571   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.143019   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:23.143121   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:23.154033   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.154043   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:23.154095   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:23.162400   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.162410   14271 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0602 11:08:23.162418   14271 kubeadm.go:1092] stopping kube-system containers ...
	I0602 11:08:23.162473   14271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:08:23.192549   14271 docker.go:442] Stopping containers: [5424fc41e82a 5f5b0dd7b333 f35280654931 b9a9032aa6a0 5f2b057e31f6 0a04721ed918 e3c1dd0cd3c0 d432e94b8645 553b06952827 41e494ce31b3 947af7b50e63 059f7d232752 d3a03a2fc0b9 bf8a809c5a96 cff10caa9374 680bea8fcf84]
	I0602 11:08:23.192630   14271 ssh_runner.go:195] Run: docker stop 5424fc41e82a 5f5b0dd7b333 f35280654931 b9a9032aa6a0 5f2b057e31f6 0a04721ed918 e3c1dd0cd3c0 d432e94b8645 553b06952827 41e494ce31b3 947af7b50e63 059f7d232752 d3a03a2fc0b9 bf8a809c5a96 cff10caa9374 680bea8fcf84
	I0602 11:08:23.222876   14271 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 11:08:23.233125   14271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:08:23.240768   14271 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  2 18:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  2 18:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Jun  2 18:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  2 18:07 /etc/kubernetes/scheduler.conf
	
	I0602 11:08:23.240824   14271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0602 11:08:23.248274   14271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0602 11:08:23.255564   14271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0602 11:08:23.263617   14271 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.263680   14271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0602 11:08:23.270956   14271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0602 11:08:23.278150   14271 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.278193   14271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0602 11:08:23.284827   14271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:08:23.292008   14271 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 11:08:23.292025   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:23.336140   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:24.189152   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:24.321146   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:24.367977   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
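Because existing configuration files were found, the restart path replays individual kubeadm init phases against the rendered config instead of running a full kubeadm init. A sketch of the same sequence, using the binary and config paths shown in the log (run on the node, not the host):

    bin=/var/lib/minikube/binaries/v1.23.6
    cfg=/var/tmp/minikube/kubeadm.yaml
    # regenerate certs, kubeconfigs, the kubelet bootstrap, the static control-plane pods and local etcd
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$bin:$PATH" kubeadm init phase $phase --config "$cfg"
    done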
	I0602 11:08:24.415440   14271 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:08:24.415503   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:24.926339   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:25.424317   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:25.476098   14271 api_server.go:71] duration metric: took 1.060645549s to wait for apiserver process to appear ...
	I0602 11:08:25.476124   14271 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:08:25.476138   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:25.477296   14271 api_server.go:256] stopped: https://127.0.0.1:52983/healthz: Get "https://127.0.0.1:52983/healthz": EOF
	I0602 11:08:25.221414   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:25.296178   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.296191   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:25.296260   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:25.329053   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.329071   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:25.329164   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:25.357741   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.357752   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:25.357810   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:25.390667   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.390682   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:25.390741   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:25.437576   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.437588   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:25.437644   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:25.466359   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.466375   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:25.466456   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:25.502948   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.502962   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:25.503019   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:25.538129   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.538146   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:25.538154   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:25.538162   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:25.582011   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:25.582029   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:25.595600   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:25.595615   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:25.652328   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:25.652345   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:25.652351   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:25.665370   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:25.665381   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:27.726129   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060700298s)
	I0602 11:08:25.977412   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:27.865104   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0602 11:08:27.865120   14271 api_server.go:102] status: https://127.0.0.1:52983/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0602 11:08:27.978216   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:27.984906   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:08:27.984929   14271 api_server.go:102] status: https://127.0.0.1:52983/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:08:28.477488   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:28.484388   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:08:28.484405   14271 api_server.go:102] status: https://127.0.0.1:52983/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:08:28.977988   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:28.983267   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:08:28.983291   14271 api_server.go:102] status: https://127.0.0.1:52983/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:08:29.478044   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:29.483906   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 200:
	ok
	I0602 11:08:29.490553   14271 api_server.go:140] control plane version: v1.23.6
	I0602 11:08:29.490564   14271 api_server.go:130] duration metric: took 4.014365072s to wait for apiserver health ...
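While the apiserver's post-start hooks (rbac/bootstrap-roles, the scheduling priority classes, apiservice registration) are still settling, /healthz keeps returning 500 with the per-check breakdown shown above, and an unauthenticated request gets 403 for system:anonymous; the wait only ends on a plain 200 "ok". A hedged way to reproduce the same probes by hand against the forwarded port from the log:

    # authenticated, verbose health check; lists each check like the 500 bodies above
    kubectl get --raw='/healthz?verbose'
    # raw, unauthenticated probe; expect 403 for system:anonymous until RBAC allows it
    curl -ks https://127.0.0.1:52983/healthz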
	I0602 11:08:29.490572   14271 cni.go:95] Creating CNI manager for ""
	I0602 11:08:29.490579   14271 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:08:29.490591   14271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:08:29.498298   14271 system_pods.go:59] 8 kube-system pods found
	I0602 11:08:29.498313   14271 system_pods.go:61] "coredns-64897985d-h47dc" [7accc8c2-babb-4fb2-a915-34bdcaf81942] Running
	I0602 11:08:29.498323   14271 system_pods.go:61] "etcd-default-k8s-different-port-20220602110711-2113" [9a73a84a-8a22-4366-a66d-df315295a7a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 11:08:29.498328   14271 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220602110711-2113" [c11ca282-ae9e-4bb4-9517-d6c8bd9deab8] Running
	I0602 11:08:29.498333   14271 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220602110711-2113" [f8bd0bd0-acca-48d9-8f9f-33abf2cb6de2] Running
	I0602 11:08:29.498337   14271 system_pods.go:61] "kube-proxy-jrk2q" [7fa38b28-1f8b-4ef3-9983-3724a52b8b00] Running
	I0602 11:08:29.498341   14271 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220602110711-2113" [5fa1cd09-e48e-465c-8a2c-fc11ab91bb5d] Running
	I0602 11:08:29.498348   14271 system_pods.go:61] "metrics-server-b955d9d8-lnk7h" [a26e7c1f-21ad-400e-9ea2-7d626d72922d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:08:29.498356   14271 system_pods.go:61] "storage-provisioner" [1e7818f7-f246-4230-bd2a-1013266312d3] Running
	I0602 11:08:29.498361   14271 system_pods.go:74] duration metric: took 7.764866ms to wait for pod list to return data ...
	I0602 11:08:29.498367   14271 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:08:29.501391   14271 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:08:29.501404   14271 node_conditions.go:123] node cpu capacity is 6
	I0602 11:08:29.501415   14271 node_conditions.go:105] duration metric: took 3.043692ms to run NodePressure ...
	I0602 11:08:29.501426   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:29.615914   14271 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0602 11:08:29.619660   14271 kubeadm.go:777] kubelet initialised
	I0602 11:08:29.619670   14271 kubeadm.go:778] duration metric: took 3.743155ms waiting for restarted kubelet to initialise ...
	I0602 11:08:29.619678   14271 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:08:29.624145   14271 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-h47dc" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:29.628299   14271 pod_ready.go:92] pod "coredns-64897985d-h47dc" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:29.628307   14271 pod_ready.go:81] duration metric: took 4.151112ms waiting for pod "coredns-64897985d-h47dc" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:29.628314   14271 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:30.226574   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:30.721539   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:30.759508   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.759521   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:30.759579   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:30.792623   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.792637   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:30.792712   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:30.822014   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.822028   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:30.822086   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:30.851154   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.851168   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:30.851240   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:30.880918   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.880931   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:30.880986   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:30.910502   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.910515   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:30.910577   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:30.941645   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.941657   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:30.941714   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:30.972909   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.972921   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:30.972928   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:30.972934   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:30.984875   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:30.984888   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:31.040921   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:31.040935   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:31.040942   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:31.053333   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:31.053346   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:33.107850   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05445655s)
	I0602 11:08:33.107952   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:33.107959   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:31.641210   14271 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:33.641265   14271 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:35.650135   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:35.721787   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:35.751661   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.751673   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:35.751730   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:35.780322   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.780334   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:35.780393   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:35.809983   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.809996   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:35.810052   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:35.838069   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.838081   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:35.838140   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:35.866612   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.866629   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:35.866713   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:35.897341   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.897354   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:35.897409   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:35.928444   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.928456   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:35.928513   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:35.956497   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.956510   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:35.956517   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:35.956524   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:35.969093   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:35.969108   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:38.024274   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055118179s)
	I0602 11:08:38.024385   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:38.024393   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:38.064021   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:38.064037   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:38.075931   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:38.075944   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:38.130990   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:35.642462   14271 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:36.642462   14271 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:36.642475   14271 pod_ready.go:81] duration metric: took 7.014033821s waiting for pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:36.642481   14271 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:38.655878   14271 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:40.632494   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:40.722073   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:40.750220   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.750232   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:40.750297   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:40.778245   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.778256   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:40.778304   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:40.807262   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.807273   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:40.807333   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:40.836172   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.836183   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:40.836239   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:40.864838   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.864850   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:40.864906   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:40.893840   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.893852   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:40.893910   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:40.923704   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.923715   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:40.923773   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:40.951957   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.951970   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:40.951978   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:40.951986   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:41.004848   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:41.004859   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:41.004865   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:41.017334   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:41.017346   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:43.066770   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0493766s)
	I0602 11:08:43.066886   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:43.066894   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:43.107798   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:43.107814   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:41.154674   14271 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:43.156222   14271 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:43.156234   14271 pod_ready.go:81] duration metric: took 6.513634404s waiting for pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:43.156241   14271 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.668817   14271 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:44.668829   14271 pod_ready.go:81] duration metric: took 1.512556931s waiting for pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.668835   14271 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jrk2q" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.673173   14271 pod_ready.go:92] pod "kube-proxy-jrk2q" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:44.673180   14271 pod_ready.go:81] duration metric: took 4.340525ms waiting for pod "kube-proxy-jrk2q" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.673186   14271 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.677163   14271 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:44.677170   14271 pod_ready.go:81] duration metric: took 3.980246ms waiting for pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.677176   14271 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:45.621045   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:45.722513   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:45.753852   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.753863   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:45.753920   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:45.782032   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.782044   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:45.782103   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:45.811660   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.811672   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:45.811730   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:45.841102   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.841115   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:45.841176   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:45.869555   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.869568   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:45.869625   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:45.896999   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.897011   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:45.897079   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:45.925033   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.925045   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:45.925100   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:45.955532   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.955543   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:45.955550   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:45.955556   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:45.994815   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:45.994828   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:46.006706   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:46.006718   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:46.059309   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:46.059318   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:46.059325   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:46.071706   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:46.071719   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:48.125554   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053788045s)
	I0602 11:08:46.690067   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:49.192051   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:50.627972   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:50.722301   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:50.752680   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.752693   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:50.752749   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:50.781019   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.781032   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:50.781090   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:50.810077   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.810088   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:50.810152   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:50.839097   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.839108   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:50.839164   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:50.870493   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.870504   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:50.870560   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:50.899156   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.899168   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:50.899224   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:50.927401   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.927413   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:50.927469   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:50.970889   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.970901   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:50.970908   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:50.970915   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:51.026070   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:51.026080   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:51.026086   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:51.037940   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:51.037952   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:53.091015   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053015843s)
	I0602 11:08:53.091123   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:53.091130   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:53.130767   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:53.130781   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:51.688335   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:53.689175   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:55.642775   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:55.722143   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:55.752596   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.752608   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:55.752663   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:55.781383   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.781395   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:55.781453   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:55.810740   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.810751   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:55.810806   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:55.839025   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.839037   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:55.839092   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:55.868111   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.868123   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:55.868185   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:55.896365   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.896376   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:55.896436   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:55.925240   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.925252   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:55.925308   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:55.954351   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.954362   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:55.954370   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:55.954377   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:55.994349   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:55.994360   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:56.006541   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:56.006553   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:56.060230   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:56.060240   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:56.060246   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:56.072372   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:56.072385   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:58.126471   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054039162s)
	I0602 11:08:56.187836   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:58.190416   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:00.626897   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:09:00.636995   13778 kubeadm.go:630] restartCluster took 4m5.698955011s
	W0602 11:09:00.637074   13778 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0602 11:09:00.637089   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:09:01.056935   13778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:09:01.066336   13778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:09:01.073784   13778 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:09:01.073830   13778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:09:01.081072   13778 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:09:01.081099   13778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:09:01.817978   13778 out.go:204]   - Generating certificates and keys ...
	I0602 11:09:02.504280   13778 out.go:204]   - Booting up control plane ...
	I0602 11:09:00.687408   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:02.689765   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:04.689850   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:07.189249   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:09.190335   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:11.691237   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:14.187781   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:16.190080   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:18.687798   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:20.690432   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:23.187958   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:25.190427   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:27.687964   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:29.691339   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:32.188132   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:34.189396   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:36.189672   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:38.689846   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:41.188841   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:43.189653   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:45.190339   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:47.690415   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:50.188091   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:52.191824   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:54.690834   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:56.691875   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:59.189437   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:01.190943   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:03.191954   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:05.692452   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:07.692576   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:10.189968   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:12.690983   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:15.188184   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:17.189909   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:19.688905   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:21.691564   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:24.190443   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:26.690498   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:28.691268   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:31.190793   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:33.191155   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:35.690951   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:37.692551   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:40.193163   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:42.691386   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:44.692387   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:46.692685   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:49.193533   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:51.691604   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:53.693237   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	W0602 11:10:57.423207   13778 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
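For context on the kubelet-check failures repeated above: they are simple GETs against the kubelet's local healthz endpoint on port 10248 (the exact URL appears verbatim in the log), which keep returning "connection refused" because the kubelet never came up. A minimal, self-contained sketch of an equivalent probe, assuming only the Go standard library and the default healthz port (this is illustrative only, not minikube's or kubeadm's own code):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeKubeletHealthz issues one GET against the kubelet's local healthz
	// endpoint, mirroring the check reported by the kubelet-check lines above.
	func probeKubeletHealthz() error {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// On this run the probe would fail here with "connect: connection refused".
			return fmt.Errorf("kubelet healthz unreachable: %w", err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("kubelet healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		if err := probeKubeletHealthz(); err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		fmt.Println("kubelet healthz ok")
	}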
	
	I0602 11:10:57.423236   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:10:57.840204   13778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:10:57.849925   13778 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:10:57.849972   13778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:10:57.857794   13778 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:10:57.857811   13778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:10:58.606461   13778 out.go:204]   - Generating certificates and keys ...
	I0602 11:10:59.124567   13778 out.go:204]   - Booting up control plane ...
	I0602 11:10:56.192552   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:58.689473   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:00.693155   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:03.193549   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:05.194270   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:07.693653   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:10.192674   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:12.691715   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:14.691808   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:17.191371   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:19.193132   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:21.193202   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:23.691940   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:25.692807   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:27.692954   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:30.191988   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:32.194025   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:34.692688   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:36.692994   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:38.693797   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:41.193247   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:43.693628   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:45.694558   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:48.191576   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:50.193727   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:52.194036   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:54.194247   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:56.694218   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:59.193493   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:01.194007   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:03.194607   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:05.693468   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:07.693608   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:09.695228   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:12.194703   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:14.693976   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:17.192125   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:19.194163   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:21.194395   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:23.693999   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:26.191617   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:28.194216   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:30.694582   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:33.193720   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:35.694487   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:38.194086   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:40.693116   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:42.693433   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:44.686833   14271 pod_ready.go:81] duration metric: took 4m0.005479685s waiting for pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace to be "Ready" ...
	E0602 11:12:44.686847   14271 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace to be "Ready" (will not retry!)
	I0602 11:12:44.686859   14271 pod_ready.go:38] duration metric: took 4m15.062761979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:12:44.686881   14271 kubeadm.go:630] restartCluster took 4m24.641108189s
	W0602 11:12:44.686956   14271 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0602 11:12:44.686973   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
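The long run of pod_ready.go lines above is minikube polling pod status until the Ready condition becomes true or the 4m0s budget expires (here metrics-server-b955d9d8-lnk7h never became Ready). A generic client-go sketch of that wait pattern, with hypothetical names and the clientset construction (e.g. from a kubeconfig) omitted; this is not minikube's pod_ready.go:

	// waitPodReady polls a pod until its Ready condition is True or the timeout
	// expires, roughly the behaviour logged by pod_ready.go above.
	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not reported yet
		})
	}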
	I0602 11:12:54.041678   13778 kubeadm.go:397] StartCluster complete in 7m59.136004493s
	I0602 11:12:54.041759   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:12:54.071372   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.071384   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:12:54.071441   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:12:54.100053   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.100066   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:12:54.100125   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:12:54.128275   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.128286   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:12:54.128343   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:12:54.157653   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.157665   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:12:54.157722   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:12:54.187430   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.187443   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:12:54.187496   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:12:54.215461   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.215472   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:12:54.215526   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:12:54.244945   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.244956   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:12:54.245011   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:12:54.274697   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.274709   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:12:54.274716   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:12:54.274725   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:12:54.287581   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:12:54.287595   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:12:56.340056   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052413965s)
	I0602 11:12:56.340164   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:12:56.340171   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:12:56.380800   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:12:56.380813   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:12:56.392375   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:12:56.392386   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:12:56.445060   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0602 11:12:56.445088   13778 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0602 11:12:56.445103   13778 out.go:239] * 
	W0602 11:12:56.445207   13778 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 11:12:56.445222   13778 out.go:239] * 
	W0602 11:12:56.445819   13778 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 11:12:56.530257   13778 out.go:177] 
	W0602 11:12:56.572600   13778 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 11:12:56.572701   13778 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0602 11:12:56.572743   13778 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0602 11:12:56.593452   13778 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 18:04:51 UTC, end at Thu 2022-06-02 18:12:58 UTC. --
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 systemd[1]: Starting Docker Application Container Engine...
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.822221462Z" level=info msg="Starting up"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.824058418Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.824139651Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.824195269Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.824296574Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.825626806Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.825660593Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.825673330Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.825685292Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.830709849Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.834670305Z" level=info msg="Loading containers: start."
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.916131885Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.947713032Z" level=info msg="Loading containers: done."
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.958029440Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.958093467Z" level=info msg="Daemon has completed initialization"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 systemd[1]: Started Docker Application Container Engine.
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.983186383Z" level=info msg="API listen on [::]:2376"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.985769795Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-06-02T18:13:00Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  18:13:00 up  1:01,  0 users,  load average: 0.56, 0.74, 0.98
	Linux old-k8s-version-20220602105906-2113 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 18:04:51 UTC, end at Thu 2022-06-02 18:13:00 UTC. --
	Jun 02 18:12:58 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 02 18:12:59 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Jun 02 18:12:59 old-k8s-version-20220602105906-2113 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 02 18:12:59 old-k8s-version-20220602105906-2113 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 02 18:12:59 old-k8s-version-20220602105906-2113 kubelet[14349]: I0602 18:12:59.568776   14349 server.go:410] Version: v1.16.0
	Jun 02 18:12:59 old-k8s-version-20220602105906-2113 kubelet[14349]: I0602 18:12:59.569114   14349 plugins.go:100] No cloud provider specified.
	Jun 02 18:12:59 old-k8s-version-20220602105906-2113 kubelet[14349]: I0602 18:12:59.569129   14349 server.go:773] Client rotation is on, will bootstrap in background
	Jun 02 18:12:59 old-k8s-version-20220602105906-2113 kubelet[14349]: I0602 18:12:59.571681   14349 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 02 18:12:59 old-k8s-version-20220602105906-2113 kubelet[14349]: W0602 18:12:59.572830   14349 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 02 18:12:59 old-k8s-version-20220602105906-2113 kubelet[14349]: W0602 18:12:59.572930   14349 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 02 18:12:59 old-k8s-version-20220602105906-2113 kubelet[14349]: F0602 18:12:59.572978   14349 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 02 18:12:59 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 02 18:12:59 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 02 18:13:00 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Jun 02 18:13:00 old-k8s-version-20220602105906-2113 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 02 18:13:00 old-k8s-version-20220602105906-2113 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 02 18:13:00 old-k8s-version-20220602105906-2113 kubelet[14362]: I0602 18:13:00.309844   14362 server.go:410] Version: v1.16.0
	Jun 02 18:13:00 old-k8s-version-20220602105906-2113 kubelet[14362]: I0602 18:13:00.310221   14362 plugins.go:100] No cloud provider specified.
	Jun 02 18:13:00 old-k8s-version-20220602105906-2113 kubelet[14362]: I0602 18:13:00.310277   14362 server.go:773] Client rotation is on, will bootstrap in background
	Jun 02 18:13:00 old-k8s-version-20220602105906-2113 kubelet[14362]: I0602 18:13:00.311999   14362 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 02 18:13:00 old-k8s-version-20220602105906-2113 kubelet[14362]: W0602 18:13:00.312699   14362 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 02 18:13:00 old-k8s-version-20220602105906-2113 kubelet[14362]: W0602 18:13:00.312786   14362 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 02 18:13:00 old-k8s-version-20220602105906-2113 kubelet[14362]: F0602 18:13:00.312853   14362 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 02 18:13:00 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 02 18:13:00 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 11:13:00.432322   14447 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
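The kubeadm output captured above points at the kubelet and the control-plane containers as the things to inspect. A minimal, hypothetical sketch of those checks against the profile from this run (the ssh-wrapped commands are assumed from the advice quoted in the log; they were not executed by the test):

	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220602105906-2113 "sudo systemctl status kubelet"
	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220602105906-2113 "sudo journalctl -xeu kubelet --no-pager"
	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220602105906-2113 "docker ps -a | grep kube | grep -v pause"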
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113: exit status 2 (465.564634ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220602105906-2113" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (491.16s)
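The kubelet in this profile keeps dying with "failed to run Kubelet: mountpoint for cpu not found", and minikube's own suggestion in the output above is to pass --extra-config=kubelet.cgroup-driver=systemd. A hypothetical manual retry of the same profile with that override (flag, profile name, memory and Kubernetes version all taken from the log above; this is not a command the test ran) might look like:

	out/minikube-darwin-amd64 start -p old-k8s-version-20220602105906-2113 --kubernetes-version=v1.16.0 --memory=2200 --driver=docker --alsologtostderr -v=1 --extra-config=kubelet.cgroup-driver=systemd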

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (43.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220602105919-2113 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113
E0602 11:06:37.958650    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113: exit status 2 (16.111571993s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113
E0602 11:06:47.389355    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:06:54.179968    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113: exit status 2 (16.103154154s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20220602105919-2113 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113
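For reference, the pause check that fails here can be reproduced by hand with the same commands the test runs (quoted above): pause the profile, then query the component states, which the test expects to read "Paused" rather than the "Stopped" seen in this run:

	out/minikube-darwin-amd64 pause -p no-preload-20220602105919-2113 --alsologtostderr -v=1
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113   # want "Paused"; this run printed "Stopped"
	out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113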
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220602105919-2113
helpers_test.go:235: (dbg) docker inspect no-preload-20220602105919-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0f302b8f5ed4f65dea1d8d45928555b315a56de9771f21132d6426321d5903b5",
	        "Created": "2022-06-02T17:59:21.591842343Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196542,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T18:00:34.496508242Z",
	            "FinishedAt": "2022-06-02T18:00:32.564133952Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/0f302b8f5ed4f65dea1d8d45928555b315a56de9771f21132d6426321d5903b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f302b8f5ed4f65dea1d8d45928555b315a56de9771f21132d6426321d5903b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f302b8f5ed4f65dea1d8d45928555b315a56de9771f21132d6426321d5903b5/hosts",
	        "LogPath": "/var/lib/docker/containers/0f302b8f5ed4f65dea1d8d45928555b315a56de9771f21132d6426321d5903b5/0f302b8f5ed4f65dea1d8d45928555b315a56de9771f21132d6426321d5903b5-json.log",
	        "Name": "/no-preload-20220602105919-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220602105919-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220602105919-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/287ef2386cddf654c60473953eb4d199390f0f320ba1a6f14be9176b6967da70-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/287ef2386cddf654c60473953eb4d199390f0f320ba1a6f14be9176b6967da70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/287ef2386cddf654c60473953eb4d199390f0f320ba1a6f14be9176b6967da70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/287ef2386cddf654c60473953eb4d199390f0f320ba1a6f14be9176b6967da70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220602105919-2113",
	                "Source": "/var/lib/docker/volumes/no-preload-20220602105919-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220602105919-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220602105919-2113",
	                "name.minikube.sigs.k8s.io": "no-preload-20220602105919-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f729207e5b0076152390f1cd3165dbbba90e1ecf3b17b940305e6db29b07c08c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51942"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51938"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51939"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51940"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51941"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f729207e5b00",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220602105919-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0f302b8f5ed4",
	                        "no-preload-20220602105919-2113"
	                    ],
	                    "NetworkID": "3c2378c45217e2c7578c492e90a28ef9e5cb0fc6dddada1c4f0cd94c3a99251d",
	                    "EndpointID": "bb93dd616d4cfa67b4ae9a6048cb6ef3c7afd76b858628906e4cf479d491f696",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220602105919-2113 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220602105919-2113 logs -n 25: (2.958681581s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                  Profile                  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p calico-20220602104456-2113                     | calico-20220602104456-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:56 PDT | 02 Jun 22 10:56 PDT |
	| start   | -p false-20220602104455-2113                      | false-20220602104455-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:56 PDT | 02 Jun 22 10:56 PDT |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                           |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=false                     |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p false-20220602104455-2113                      | false-20220602104455-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:56 PDT | 02 Jun 22 10:56 PDT |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p false-20220602104455-2113                      | false-20220602104455-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:57 PDT | 02 Jun 22 10:57 PDT |
	| start   | -p bridge-20220602104455-2113                     | bridge-20220602104455-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:57 PDT | 02 Jun 22 10:57 PDT |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                           |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p bridge-20220602104455-2113                     | bridge-20220602104455-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:57 PDT | 02 Jun 22 10:57 PDT |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p bridge-20220602104455-2113                     | bridge-20220602104455-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:58 PDT | 02 Jun 22 10:58 PDT |
	| delete  | -p cilium-20220602104456-2113                     | cilium-20220602104456-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:58 PDT | 02 Jun 22 10:58 PDT |
	| start   | -p                                                | enable-default-cni-20220602104455-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:58 PDT | 02 Jun 22 10:58 PDT |
	|         | enable-default-cni-20220602104455-2113            |                                           |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220602104455-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:58 PDT | 02 Jun 22 10:58 PDT |
	|         | enable-default-cni-20220602104455-2113            |                                           |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| start   | -p kubenet-20220602104455-2113                    | kubenet-20220602104455-2113               | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:58 PDT | 02 Jun 22 10:59 PDT |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --network-plugin=kubenet                          |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p kubenet-20220602104455-2113                    | kubenet-20220602104455-2113               | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220602104455-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	|         | enable-default-cni-20220602104455-2113            |                                           |         |                |                     |                     |
	| delete  | -p kubenet-20220602104455-2113                    | kubenet-20220602104455-2113               | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	| delete  | -p                                                | disable-driver-mounts-20220602105918-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	|         | disable-driver-mounts-20220602105918-2113         |                                           |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | --memory=2200                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220602105906-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:04 PDT | 02 Jun 22 11:04 PDT |
	|         | old-k8s-version-20220602105906-2113               |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220602105906-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:04 PDT | 02 Jun 22 11:04 PDT |
	|         | old-k8s-version-20220602105906-2113               |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | --memory=2200                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                           |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 11:04:50
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 11:04:50.212912   13778 out.go:296] Setting OutFile to fd 1 ...
	I0602 11:04:50.213271   13778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:04:50.213277   13778 out.go:309] Setting ErrFile to fd 2...
	I0602 11:04:50.213283   13778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:04:50.213377   13778 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 11:04:50.213641   13778 out.go:303] Setting JSON to false
	I0602 11:04:50.229375   13778 start.go:115] hostinfo: {"hostname":"37309.local","uptime":3859,"bootTime":1654189231,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 11:04:50.229480   13778 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 11:04:50.251550   13778 out.go:177] * [old-k8s-version-20220602105906-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 11:04:50.294147   13778 notify.go:193] Checking for updates...
	I0602 11:04:50.315087   13778 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 11:04:50.336129   13778 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:04:50.357034   13778 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 11:04:50.399144   13778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 11:04:50.420008   13778 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 11:04:50.457779   13778 config.go:178] Loaded profile config "old-k8s-version-20220602105906-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0602 11:04:50.480033   13778 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0602 11:04:50.516984   13778 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 11:04:50.590398   13778 docker.go:137] docker version: linux-20.10.14
	I0602 11:04:50.590521   13778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:04:50.717181   13778 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:04:50.66469354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:04:50.739075   13778 out.go:177] * Using the docker driver based on existing profile
	I0602 11:04:50.759620   13778 start.go:284] selected driver: docker
	I0602 11:04:50.759645   13778 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220602105906-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mul
tiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:04:50.759795   13778 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 11:04:50.763139   13778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:04:50.890034   13778 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:04:50.837983116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:04:50.890213   13778 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 11:04:50.890234   13778 cni.go:95] Creating CNI manager for ""
	I0602 11:04:50.890242   13778 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:04:50.890251   13778 start_flags.go:306] config:
	{Name:old-k8s-version-20220602105906-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDom
ain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountSt
ring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:04:50.932780   13778 out.go:177] * Starting control plane node old-k8s-version-20220602105906-2113 in cluster old-k8s-version-20220602105906-2113
	I0602 11:04:50.953659   13778 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 11:04:50.974798   13778 out.go:177] * Pulling base image ...
	I0602 11:04:51.016700   13778 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 11:04:51.016726   13778 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 11:04:51.016784   13778 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0602 11:04:51.016813   13778 cache.go:57] Caching tarball of preloaded images
	I0602 11:04:51.016994   13778 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 11:04:51.017034   13778 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
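	(editor's note) The check above only confirms that the cached preload tarball is present on disk. A minimal manual equivalent, using the cache path shown in the surrounding log lines, would be:
	# manual equivalent of the preload-exists check; path taken verbatim from the log above
	PRELOAD="/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
	stat "$PRELOAD" >/dev/null 2>&1 && echo "preload present" || echo "preload missing"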
	I0602 11:04:51.017938   13778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/config.json ...
	I0602 11:04:51.082281   13778 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 11:04:51.082299   13778 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 11:04:51.082308   13778 cache.go:206] Successfully downloaded all kic artifacts
	I0602 11:04:51.082351   13778 start.go:352] acquiring machines lock for old-k8s-version-20220602105906-2113: {Name:mk7f6a3ed7e2845a9fdc2d9a313dfa02067477c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 11:04:51.082434   13778 start.go:356] acquired machines lock for "old-k8s-version-20220602105906-2113" in 59.982µs
	I0602 11:04:51.082454   13778 start.go:94] Skipping create...Using existing machine configuration
	I0602 11:04:51.082463   13778 fix.go:55] fixHost starting: 
	I0602 11:04:51.082690   13778 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220602105906-2113 --format={{.State.Status}}
	I0602 11:04:51.150104   13778 fix.go:103] recreateIfNeeded on old-k8s-version-20220602105906-2113: state=Stopped err=<nil>
	W0602 11:04:51.150141   13778 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 11:04:51.171923   13778 out.go:177] * Restarting existing docker container for "old-k8s-version-20220602105906-2113" ...
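	(editor's note) The restart decision above (state=Stopped, so create is skipped and the existing container is restarted) can be reproduced with the same plain docker commands the log issues through cli_runner; a minimal sketch using the container name from this run:
	# inspect-then-start sequence, mirroring the docker commands shown in the log
	NAME=old-k8s-version-20220602105906-2113
	STATE=$(docker container inspect "$NAME" --format '{{.State.Status}}')
	# start only when the container is not already running
	[ "$STATE" = "running" ] || docker start "$NAME"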
	I0602 11:04:48.488292   13525 pod_ready.go:102] pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace has status "Ready":"False"
	I0602 11:04:50.518933   13525 pod_ready.go:102] pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace has status "Ready":"False"
	I0602 11:04:52.988519   13525 pod_ready.go:102] pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace has status "Ready":"False"
	I0602 11:04:51.192766   13778 cli_runner.go:164] Run: docker start old-k8s-version-20220602105906-2113
	I0602 11:04:51.562681   13778 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220602105906-2113 --format={{.State.Status}}
	I0602 11:04:51.662883   13778 kic.go:416] container "old-k8s-version-20220602105906-2113" state is running.
	I0602 11:04:51.663452   13778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220602105906-2113
	I0602 11:04:51.737105   13778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/config.json ...
	I0602 11:04:51.737509   13778 machine.go:88] provisioning docker machine ...
	I0602 11:04:51.737549   13778 ubuntu.go:169] provisioning hostname "old-k8s-version-20220602105906-2113"
	I0602 11:04:51.737658   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:51.809463   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:51.809681   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:51.809694   13778 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220602105906-2113 && echo "old-k8s-version-20220602105906-2113" | sudo tee /etc/hostname
	I0602 11:04:51.932527   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220602105906-2113
	
	I0602 11:04:51.932606   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.004974   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:52.005104   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:52.005119   13778 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220602105906-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220602105906-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220602105906-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 11:04:52.121395   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:04:52.121423   13778 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 11:04:52.121455   13778 ubuntu.go:177] setting up certificates
	I0602 11:04:52.121472   13778 provision.go:83] configureAuth start
	I0602 11:04:52.121550   13778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220602105906-2113
	I0602 11:04:52.192336   13778 provision.go:138] copyHostCerts
	I0602 11:04:52.192420   13778 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 11:04:52.192429   13778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 11:04:52.192520   13778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 11:04:52.192739   13778 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 11:04:52.192752   13778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 11:04:52.192807   13778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 11:04:52.192939   13778 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 11:04:52.192945   13778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 11:04:52.192998   13778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 11:04:52.193133   13778 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220602105906-2113 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220602105906-2113]
	I0602 11:04:52.320731   13778 provision.go:172] copyRemoteCerts
	I0602 11:04:52.320787   13778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 11:04:52.320827   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.392403   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:52.478826   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0602 11:04:52.497596   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0602 11:04:52.514656   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 11:04:52.533451   13778 provision.go:86] duration metric: configureAuth took 411.958536ms
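	(editor's note) configureAuth above regenerates the machine's server certificate with the SAN list printed a few lines earlier. A minimal sanity check of that output from the host, assuming $MINIKUBE_HOME stands in for the long integration directory used in this run:
	# hypothetical shorthand: $MINIKUBE_HOME points at the directory holding the .minikube tree from this log
	CERTS="$MINIKUBE_HOME/.minikube"
	# the server cert should chain to the minikube CA and carry the node IP/hostname SANs
	openssl verify -CAfile "$CERTS/certs/ca.pem" "$CERTS/machines/server.pem"
	openssl x509 -noout -text -in "$CERTS/machines/server.pem" | grep -A1 'Subject Alternative Name'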
	I0602 11:04:52.533463   13778 ubuntu.go:193] setting minikube options for container-runtime
	I0602 11:04:52.533626   13778 config.go:178] Loaded profile config "old-k8s-version-20220602105906-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0602 11:04:52.533686   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.603829   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:52.604076   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:52.604123   13778 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 11:04:52.720513   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 11:04:52.720529   13778 ubuntu.go:71] root file system type: overlay
	I0602 11:04:52.720687   13778 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 11:04:52.720759   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.791816   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:52.791987   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:52.792042   13778 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 11:04:52.916537   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 11:04:52.916616   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.986921   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:52.987077   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:52.987090   13778 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 11:04:53.105706   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: 
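	(editor's note) The diff-or-swap one-liner above only replaces and restarts the service when the rendered unit actually changed. A quick spot-check that the swap landed, using the same commands the log runs a few lines further down:
	# run inside the machine: confirm the regenerated ExecStart line and the engine version it produced
	sudo systemctl cat docker.service | grep '^ExecStart=/usr/bin/dockerd'
	docker version --format '{{.Server.Version}}'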
	I0602 11:04:53.105744   13778 machine.go:91] provisioned docker machine in 1.368201682s
	I0602 11:04:53.105753   13778 start.go:306] post-start starting for "old-k8s-version-20220602105906-2113" (driver="docker")
	I0602 11:04:53.105759   13778 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 11:04:53.105828   13778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 11:04:53.105878   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:53.176368   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:53.262898   13778 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 11:04:53.266671   13778 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 11:04:53.266685   13778 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 11:04:53.266692   13778 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 11:04:53.266697   13778 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 11:04:53.266705   13778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 11:04:53.266812   13778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 11:04:53.266949   13778 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 11:04:53.267114   13778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 11:04:53.274148   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:04:53.291722   13778 start.go:309] post-start completed in 185.950644ms
	I0602 11:04:53.291805   13778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 11:04:53.291855   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:53.362608   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:53.445871   13778 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 11:04:53.450173   13778 fix.go:57] fixHost completed within 2.367659825s
	I0602 11:04:53.450189   13778 start.go:81] releasing machines lock for "old-k8s-version-20220602105906-2113", held for 2.367704829s
	I0602 11:04:53.450271   13778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220602105906-2113
	I0602 11:04:53.521262   13778 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 11:04:53.521302   13778 ssh_runner.go:195] Run: systemctl --version
	I0602 11:04:53.521350   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:53.521351   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:53.597060   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:53.598923   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:53.810081   13778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 11:04:53.822458   13778 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:04:53.832182   13778 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 11:04:53.832234   13778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 11:04:53.841612   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 11:04:53.854258   13778 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 11:04:53.920122   13778 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 11:04:53.988970   13778 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:04:53.999075   13778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 11:04:54.067007   13778 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 11:04:54.076634   13778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:04:54.111937   13778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:04:54.188721   13778 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0602 11:04:54.188859   13778 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220602105906-2113 dig +short host.docker.internal
	I0602 11:04:54.320880   13778 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 11:04:54.320996   13778 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 11:04:54.325104   13778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:04:54.334818   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:54.405836   13778 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 11:04:54.405911   13778 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:04:54.436193   13778 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0602 11:04:54.436207   13778 docker.go:541] Images already preloaded, skipping extraction
	I0602 11:04:54.436280   13778 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:04:54.467205   13778 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0602 11:04:54.467227   13778 cache_images.go:84] Images are preloaded, skipping loading
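	(editor's note) "Images are preloaded" above simply means the daemon already reports every image the preload tarball would provide. A rough manual re-check, using the image list printed in this log:
	# re-check that the expected v1.16.0 images are present in the daemon
	for img in \
	  k8s.gcr.io/kube-apiserver:v1.16.0 \
	  k8s.gcr.io/kube-controller-manager:v1.16.0 \
	  k8s.gcr.io/kube-proxy:v1.16.0 \
	  k8s.gcr.io/kube-scheduler:v1.16.0 \
	  k8s.gcr.io/etcd:3.3.15-0 \
	  k8s.gcr.io/coredns:1.6.2 \
	  k8s.gcr.io/pause:3.1; do
	  docker images --format '{{.Repository}}:{{.Tag}}' | grep -qx "$img" || echo "missing: $img"
	done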
	I0602 11:04:54.467299   13778 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 11:04:54.542038   13778 cni.go:95] Creating CNI manager for ""
	I0602 11:04:54.542049   13778 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:04:54.542067   13778 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 11:04:54.542080   13778 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220602105906-2113 NodeName:old-k8s-version-20220602105906-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientC
AFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 11:04:54.542186   13778 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220602105906-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220602105906-2113
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 11:04:54.542264   13778 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220602105906-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
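	(editor's note) The kubeadm config and kubelet flags printed above are copied onto the node in the scp lines that follow. Purely as an illustration of how a config of this shape is consumed, and not necessarily the exact command this restart path runs, it could be fed to the cached kubeadm binary like so:
	# illustrative only: applying a generated config with the kubeadm binary already present on the node
	sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new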
	I0602 11:04:54.542338   13778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0602 11:04:54.550328   13778 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 11:04:54.550378   13778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 11:04:54.557754   13778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0602 11:04:54.570217   13778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 11:04:54.583212   13778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2146 bytes)
	I0602 11:04:54.595430   13778 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0602 11:04:54.598973   13778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
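	(editor's note) The one-liner above is the log's idempotent pattern for pinning a name in /etc/hosts: drop any stale entry, append the fresh mapping, then copy the result back in one step. The same commands spelled out with hypothetical variable names:
	# same pattern as the command above, with the literal values factored into variables
	IP=192.168.49.2
	NAME=control-plane.minikube.internal
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; echo "$IP"$'\t'"$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts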
	I0602 11:04:54.608290   13778 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113 for IP: 192.168.49.2
	I0602 11:04:54.608396   13778 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 11:04:54.608444   13778 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 11:04:54.608525   13778 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/client.key
	I0602 11:04:54.608588   13778 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.key.dd3b5fb2
	I0602 11:04:54.608636   13778 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.key
	I0602 11:04:54.608843   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 11:04:54.608888   13778 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 11:04:54.608900   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 11:04:54.608937   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 11:04:54.608966   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 11:04:54.608997   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 11:04:54.609062   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:04:54.609636   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 11:04:54.626606   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 11:04:54.643214   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 11:04:54.660634   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 11:04:54.678739   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 11:04:54.701311   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 11:04:54.718932   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 11:04:54.736064   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 11:04:54.752603   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 11:04:54.771409   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 11:04:54.788319   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 11:04:54.805672   13778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 11:04:54.819496   13778 ssh_runner.go:195] Run: openssl version
	I0602 11:04:54.825123   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 11:04:54.832756   13778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 11:04:54.836487   13778 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 11:04:54.836529   13778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 11:04:54.841628   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
	I0602 11:04:54.848799   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 11:04:54.856314   13778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 11:04:54.860364   13778 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 11:04:54.860406   13778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 11:04:54.865383   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 11:04:54.873566   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 11:04:54.881515   13778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:04:54.885348   13778 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:04:54.885384   13778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:04:54.890326   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
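	(editor's note) The 51391683.0, 3ec20f2e.0 and b5213941.0 links created above follow OpenSSL's subject-hash naming: the link name is the certificate's subject hash plus a ".0" suffix. Deriving one such link by hand, using the minikubeCA certificate from this run:
	# the hash printed here is exactly the link name used above (b5213941 for minikubeCA.pem)
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"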
	I0602 11:04:54.897388   13778 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220602105906-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Namespace:default APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:fals
e ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:04:54.897507   13778 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:04:54.926275   13778 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 11:04:54.933771   13778 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 11:04:54.933784   13778 kubeadm.go:626] restartCluster start
	I0602 11:04:54.933827   13778 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 11:04:54.941071   13778 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:54.941133   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:55.012069   13778 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220602105906-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:04:55.012243   13778 kubeconfig.go:127] "old-k8s-version-20220602105906-2113" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig - will repair!
	I0602 11:04:55.012551   13778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:04:55.013835   13778 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 11:04:55.021171   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.021223   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.029814   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:54.990253   13525 pod_ready.go:102] pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace has status "Ready":"False"
	I0602 11:04:57.490672   13525 pod_ready.go:102] pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace has status "Ready":"False"
	I0602 11:04:55.230022   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.239655   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.250586   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:55.430691   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.430839   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.443438   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:55.629918   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.630056   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.642922   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:55.830069   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.830146   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.839562   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.029929   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.030041   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.040636   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.230080   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.230187   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.240520   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.430805   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.430932   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.442009   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.630654   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.630783   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.641383   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.832024   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.832186   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.843733   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.030158   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.030295   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.040942   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.230556   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.230665   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.240962   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.430085   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.430185   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.440845   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.632018   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.632152   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.642712   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.832058   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.832177   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.842760   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:58.031624   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:58.031750   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:58.041861   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:58.041871   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:58.041914   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:58.050439   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:58.050451   13778 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
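
The block above is minikube polling for a running kube-apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*) every few hundred milliseconds until its deadline passes; only then does it conclude that the existing control plane cannot be reused and needs reconfiguring. A minimal, self-contained Go sketch of that poll-until-deadline pattern follows; the helper name and the local exec.Command call are illustrative assumptions, not minikube's actual implementation (in the log the command runs over SSH inside the node).

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID re-runs pgrep at a fixed interval until it returns a
// PID or the deadline expires, mirroring the loop in the log above.
func waitForAPIServerPID(timeout, interval time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if pid := strings.TrimSpace(string(out)); err == nil && pid != "" {
			return pid, nil
		}
		time.Sleep(interval)
	}
	return "", errors.New("timed out waiting for the condition")
}

func main() {
	pid, err := waitForAPIServerPID(2*time.Minute, 200*time.Millisecond)
	if err != nil {
		// This is the point at which the log above reports
		// "needs reconfigure: apiserver error: timed out waiting for the condition".
		fmt.Println("apiserver error:", err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}
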
	I0602 11:04:58.050460   13778 kubeadm.go:1092] stopping kube-system containers ...
	I0602 11:04:58.050517   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:04:58.078781   13778 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 11:04:58.088953   13778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:04:58.096401   13778 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5743 Jun  2 18:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5779 Jun  2 18:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5923 Jun  2 18:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5731 Jun  2 18:01 /etc/kubernetes/scheduler.conf
	
	I0602 11:04:58.096451   13778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0602 11:04:58.104096   13778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0602 11:04:58.111337   13778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0602 11:04:58.118781   13778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0602 11:04:58.125918   13778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:04:58.133559   13778 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 11:04:58.133572   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:04:58.183775   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:04:58.896537   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:04:59.102587   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:04:59.155939   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
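
The reconfigure path above copies the regenerated kubeadm.yaml into place and then re-runs individual kubeadm "init phase" subcommands in order: certs, kubeconfig, kubelet-start, control-plane, and etcd. A short Go sketch of driving those phases sequentially, under the assumption of local execution (the real commands run over SSH) and with simplified error handling:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const (
		binDir = "/var/lib/minikube/binaries/v1.16.0"
		cfg    = "/var/tmp/minikube/kubeadm.yaml"
	)
	// Same phase order as the log above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, cfg)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
			return
		}
	}
	fmt.Println("control-plane static pod manifests regenerated")
}
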
	I0602 11:04:59.209147   13778 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:04:59.209209   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:04:59.720023   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:04:59.491633   13525 pod_ready.go:102] pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace has status "Ready":"False"
	I0602 11:05:01.991937   13525 pod_ready.go:102] pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace has status "Ready":"False"
	I0602 11:05:02.483619   13525 pod_ready.go:81] duration metric: took 4m0.00409725s waiting for pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace to be "Ready" ...
	E0602 11:05:02.483644   13525 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace to be "Ready" (will not retry!)
	I0602 11:05:02.483698   13525 pod_ready.go:38] duration metric: took 4m15.053210658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:05:02.483739   13525 kubeadm.go:630] restartCluster took 4m24.531157256s
	W0602 11:05:02.483854   13525 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0602 11:05:02.483882   13525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:05:00.218628   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:00.717988   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:01.217869   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:01.720091   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:02.218009   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:02.720026   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:03.218005   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:03.719549   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:04.218269   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:04.720072   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:05.218193   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:05.719036   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:06.218362   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:06.718089   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:07.218191   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:07.720187   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:08.218889   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:08.720174   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:09.218254   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:09.718308   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:10.218927   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:10.718179   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:11.218634   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:11.720188   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:12.218325   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:12.718650   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:13.219209   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:13.718177   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:14.220264   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:14.720245   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:15.218876   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:15.718362   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:16.218196   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:16.720267   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:17.218738   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:17.720295   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:18.219639   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:18.719893   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:19.220307   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:19.718573   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:20.218881   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:20.718810   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:21.218435   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:21.720434   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:22.218420   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:22.720437   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:23.218341   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:23.718492   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:24.219768   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:24.718807   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:25.218974   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:25.720400   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:26.218669   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:26.720487   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:27.220515   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:27.720403   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:28.218730   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:28.718903   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:29.218531   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:29.720002   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:30.219067   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:30.720510   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:31.219729   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:31.720605   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:32.218956   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:32.720577   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:33.218952   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:33.720422   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:34.219560   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:34.718658   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:35.219547   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:35.720592   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:36.219099   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:36.719579   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:37.220649   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:37.718593   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:38.219903   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:38.719838   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:39.219406   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:39.718563   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:40.811222   13525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.326662804s)
	I0602 11:05:40.811281   13525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:05:40.821130   13525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:05:40.829186   13525 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:05:40.829233   13525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:05:40.836939   13525 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:05:40.836966   13525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:05:41.324890   13525 out.go:204]   - Generating certificates and keys ...
	I0602 11:05:42.371876   13525 out.go:204]   - Booting up control plane ...
	I0602 11:05:40.218840   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:40.718801   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:41.218646   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:41.720566   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:42.220521   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:42.718687   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:43.218743   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:43.719443   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:44.218763   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:44.718717   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:45.219727   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:45.719434   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:46.218669   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:46.719292   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:47.218839   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:47.720682   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:48.219900   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:48.718703   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:49.218731   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:49.718948   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:48.417839   13525 out.go:204]   - Configuring RBAC rules ...
	I0602 11:05:48.793207   13525 cni.go:95] Creating CNI manager for ""
	I0602 11:05:48.793218   13525 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:05:48.793246   13525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 11:05:48.793326   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=no-preload-20220602105919-2113 minikube.k8s.io/updated_at=2022_06_02T11_05_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:48.793330   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:48.802337   13525 ops.go:34] apiserver oom_adj: -16
	I0602 11:05:48.982807   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:49.547906   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:50.046418   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:50.547906   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:51.047017   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:51.546082   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:52.046799   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:52.545914   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:53.047211   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:50.219516   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:50.718836   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:51.218950   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:51.719045   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:52.220332   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:52.719306   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:53.219458   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:53.719131   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:54.219966   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:54.718927   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:53.546077   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:54.047664   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:54.545968   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:55.047870   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:55.546564   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:56.046044   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:56.546214   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:57.047968   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:57.546059   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:58.048016   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:55.219031   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:55.718981   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:56.220088   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:56.718966   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:57.219844   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:57.718981   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:58.221005   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:58.719195   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:59.220136   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:05:59.250806   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.250818   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:05:59.250893   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:05:59.280792   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.280803   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:05:59.280863   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:05:59.308900   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.308911   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:05:59.308972   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:05:59.337622   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.337634   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:05:59.337694   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:05:59.368293   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.368306   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:05:59.368364   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:05:59.396426   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.396439   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:05:59.396499   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:05:59.425726   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.425739   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:05:59.425795   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:05:59.454519   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.454531   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:05:59.454538   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:05:59.454547   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:05:59.466217   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:05:59.466232   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:05:59.517449   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:05:59.517462   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:05:59.517469   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:05:59.530200   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:05:59.530214   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
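
In the diagnostic pass above, each control-plane component is checked with docker ps -a --filter=name=k8s_<component>; an empty result yields the "No container was found matching" warnings, after which dmesg, describe nodes, Docker journal, container status and kubelet logs are gathered. A small illustrative Go loop over the same filters (assuming local docker access; not minikube's actual logs collector):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same component order as the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kubernetes-dashboard", "storage-provisioner", "kube-controller-manager",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a", "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: docker ps failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
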
	I0602 11:05:58.547423   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:59.046038   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:59.546458   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:06:00.046043   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:06:00.546396   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:06:01.046495   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:06:01.548087   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:06:01.621971   13525 kubeadm.go:1045] duration metric: took 12.828480532s to wait for elevateKubeSystemPrivileges.
	I0602 11:06:01.621988   13525 kubeadm.go:397] StartCluster complete in 5m23.705021941s
	I0602 11:06:01.622011   13525 settings.go:142] acquiring lock: {Name:mka48fc2cc9e132f8df9370d54d7f09abdd5d2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:06:01.622099   13525 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:06:01.622731   13525 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:06:02.138987   13525 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220602105919-2113" rescaled to 1
	I0602 11:06:02.139031   13525 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 11:06:02.139040   13525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 11:06:02.139087   13525 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0602 11:06:02.161332   13525 out.go:177] * Verifying Kubernetes components...
	I0602 11:06:02.139277   13525 config.go:178] Loaded profile config "no-preload-20220602105919-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:06:02.161401   13525 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220602105919-2113"
	I0602 11:06:02.161401   13525 addons.go:65] Setting dashboard=true in profile "no-preload-20220602105919-2113"
	I0602 11:06:02.161402   13525 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220602105919-2113"
	I0602 11:06:02.161434   13525 addons.go:65] Setting metrics-server=true in profile "no-preload-20220602105919-2113"
	I0602 11:06:02.234564   13525 addons.go:153] Setting addon metrics-server=true in "no-preload-20220602105919-2113"
	I0602 11:06:02.234576   13525 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220602105919-2113"
	W0602 11:06:02.234584   13525 addons.go:165] addon metrics-server should already be in state true
	I0602 11:06:02.234586   13525 addons.go:153] Setting addon dashboard=true in "no-preload-20220602105919-2113"
	I0602 11:06:02.234620   13525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220602105919-2113"
	W0602 11:06:02.234632   13525 addons.go:165] addon dashboard should already be in state true
	I0602 11:06:02.234641   13525 host.go:66] Checking if "no-preload-20220602105919-2113" exists ...
	W0602 11:06:02.234595   13525 addons.go:165] addon storage-provisioner should already be in state true
	I0602 11:06:02.234674   13525 host.go:66] Checking if "no-preload-20220602105919-2113" exists ...
	I0602 11:06:02.234676   13525 host.go:66] Checking if "no-preload-20220602105919-2113" exists ...
	I0602 11:06:02.234601   13525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:06:02.235069   13525 cli_runner.go:164] Run: docker container inspect no-preload-20220602105919-2113 --format={{.State.Status}}
	I0602 11:06:02.235135   13525 cli_runner.go:164] Run: docker container inspect no-preload-20220602105919-2113 --format={{.State.Status}}
	I0602 11:06:02.235161   13525 cli_runner.go:164] Run: docker container inspect no-preload-20220602105919-2113 --format={{.State.Status}}
	I0602 11:06:02.235952   13525 cli_runner.go:164] Run: docker container inspect no-preload-20220602105919-2113 --format={{.State.Status}}
	I0602 11:06:02.243739   13525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 11:06:02.255736   13525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220602105919-2113
	I0602 11:06:02.387806   13525 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0602 11:06:02.348441   13525 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220602105919-2113"
	I0602 11:06:02.366429   13525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0602 11:06:02.387854   13525 addons.go:165] addon default-storageclass should already be in state true
	I0602 11:06:02.450797   13525 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0602 11:06:02.409034   13525 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:06:02.429992   13525 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0602 11:06:02.429998   13525 host.go:66] Checking if "no-preload-20220602105919-2113" exists ...
	I0602 11:06:02.442672   13525 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220602105919-2113" to be "Ready" ...
	I0602 11:06:02.471926   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 11:06:02.471926   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0602 11:06:02.492894   13525 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0602 11:06:02.472009   13525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220602105919-2113
	I0602 11:06:02.472018   13525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220602105919-2113
	I0602 11:06:02.472515   13525 cli_runner.go:164] Run: docker container inspect no-preload-20220602105919-2113 --format={{.State.Status}}
	I0602 11:06:02.476345   13525 node_ready.go:49] node "no-preload-20220602105919-2113" has status "Ready":"True"
	I0602 11:06:02.513957   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0602 11:06:02.513974   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0602 11:06:02.513959   13525 node_ready.go:38] duration metric: took 42.017221ms waiting for node "no-preload-20220602105919-2113" to be "Ready" ...
	I0602 11:06:02.513998   13525 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:06:02.514072   13525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220602105919-2113
	I0602 11:06:02.521948   13525 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-6m889" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:02.575900   13525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51942 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/no-preload-20220602105919-2113/id_rsa Username:docker}
	I0602 11:06:02.609147   13525 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 11:06:02.609164   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 11:06:02.609242   13525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220602105919-2113
	I0602 11:06:02.611033   13525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51942 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/no-preload-20220602105919-2113/id_rsa Username:docker}
	I0602 11:06:02.612643   13525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51942 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/no-preload-20220602105919-2113/id_rsa Username:docker}
	I0602 11:06:02.685895   13525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51942 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/no-preload-20220602105919-2113/id_rsa Username:docker}
	I0602 11:06:02.695658   13525 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0602 11:06:02.695670   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0602 11:06:02.714463   13525 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0602 11:06:02.714475   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0602 11:06:02.779112   13525 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:06:02.779128   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0602 11:06:02.785059   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0602 11:06:02.785076   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0602 11:06:02.794962   13525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:06:02.800989   13525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:06:02.807170   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0602 11:06:02.807188   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0602 11:06:02.893079   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0602 11:06:02.893100   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0602 11:06:02.974206   13525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 11:06:03.086155   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0602 11:06:03.086169   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0602 11:06:03.194757   13525 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
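
The command at 11:06:02.243739 pipes the coredns ConfigMap through sed to insert a "hosts" block mapping host.minikube.internal to the host gateway (192.168.65.2 here) just before the forward plugin, then replaces the ConfigMap; the line above confirms the record was injected. A minimal Go sketch of the same insertion applied to a Corefile string (the sample Corefile and function name are illustrative, not minikube's code, which uses the sed pipeline shown in the log):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord adds a hosts{} stanza ahead of the forward plugin line.
func injectHostRecord(corefile, ip string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hosts)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.65.2"))
}
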
	I0602 11:06:03.210171   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0602 11:06:03.210183   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0602 11:06:03.375843   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0602 11:06:03.375859   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0602 11:06:03.572646   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0602 11:06:03.572663   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0602 11:06:03.609224   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0602 11:06:03.609244   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0602 11:06:03.680445   13525 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220602105919-2113"
	I0602 11:06:03.689077   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:06:03.689093   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0602 11:06:03.772979   13525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:06:04.535326   13525 pod_ready.go:102] pod "coredns-64897985d-6m889" in "kube-system" namespace has status "Ready":"False"
	I0602 11:06:04.785766   13525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.012719524s)
	I0602 11:06:04.809002   13525 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0602 11:06:01.585281   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055020437s)
	I0602 11:06:01.585394   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:01.585402   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:04.133605   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:04.221004   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:04.251619   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.251631   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:04.251691   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:04.292078   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.292092   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:04.292154   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:04.339824   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.339842   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:04.339915   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:04.377243   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.377271   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:04.377353   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:04.408245   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.408257   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:04.408326   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:04.441761   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.441772   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:04.441834   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:04.471465   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.471482   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:04.471551   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:04.507089   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.507101   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:04.507107   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:04.507115   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:04.522059   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:04.522082   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:04.592918   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:04.592943   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:04.592954   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:04.609191   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:04.609209   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:04.882616   13525 addons.go:417] enableAddons completed in 2.743486245s
	I0602 11:06:07.035944   13525 pod_ready.go:102] pod "coredns-64897985d-6m889" in "kube-system" namespace has status "Ready":"False"
	I0602 11:06:08.567745   13525 pod_ready.go:92] pod "coredns-64897985d-6m889" in "kube-system" namespace has status "Ready":"True"
	I0602 11:06:08.567758   13525 pod_ready.go:81] duration metric: took 6.045686452s waiting for pod "coredns-64897985d-6m889" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.567764   13525 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-vnxnm" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.572156   13525 pod_ready.go:92] pod "coredns-64897985d-vnxnm" in "kube-system" namespace has status "Ready":"True"
	I0602 11:06:08.572165   13525 pod_ready.go:81] duration metric: took 4.396943ms waiting for pod "coredns-64897985d-vnxnm" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.572172   13525 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.580106   13525 pod_ready.go:92] pod "etcd-no-preload-20220602105919-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:06:08.580116   13525 pod_ready.go:81] duration metric: took 7.939202ms waiting for pod "etcd-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.580124   13525 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.585011   13525 pod_ready.go:92] pod "kube-apiserver-no-preload-20220602105919-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:06:08.585021   13525 pod_ready.go:81] duration metric: took 4.892259ms waiting for pod "kube-apiserver-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.585027   13525 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.589734   13525 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220602105919-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:06:08.589743   13525 pod_ready.go:81] duration metric: took 4.7114ms waiting for pod "kube-controller-manager-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.589749   13525 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cjctl" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.933111   13525 pod_ready.go:92] pod "kube-proxy-cjctl" in "kube-system" namespace has status "Ready":"True"
	I0602 11:06:08.933120   13525 pod_ready.go:81] duration metric: took 343.36086ms waiting for pod "kube-proxy-cjctl" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.933126   13525 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:09.333186   13525 pod_ready.go:92] pod "kube-scheduler-no-preload-20220602105919-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:06:09.333196   13525 pod_ready.go:81] duration metric: took 400.05867ms waiting for pod "kube-scheduler-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:09.333203   13525 pod_ready.go:38] duration metric: took 6.819069303s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:06:09.333216   13525 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:06:09.333264   13525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:09.372834   13525 api_server.go:71] duration metric: took 7.233658751s to wait for apiserver process to appear ...
	I0602 11:06:09.372853   13525 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:06:09.372862   13525 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51941/healthz ...
	I0602 11:06:09.378487   13525 api_server.go:266] https://127.0.0.1:51941/healthz returned 200:
	ok
	I0602 11:06:09.379772   13525 api_server.go:140] control plane version: v1.23.6
	I0602 11:06:09.379782   13525 api_server.go:130] duration metric: took 6.923741ms to wait for apiserver health ...
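
Once the apiserver process is found, the health check above polls https://127.0.0.1:51941/healthz (the Docker-forwarded API port for this profile) until it answers 200, then reads the control-plane version. A minimal Go sketch of such a healthz poll, assuming the endpoint is readable without client credentials and skipping TLS verification only to stay self-contained (the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the endpoint until it returns HTTP 200 or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitHealthz("https://127.0.0.1:51941/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
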
	I0602 11:06:09.379786   13525 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:06:09.540922   13525 system_pods.go:59] 9 kube-system pods found
	I0602 11:06:09.540950   13525 system_pods.go:61] "coredns-64897985d-6m889" [1efb4b3b-2c70-4955-ae5a-1ca9c4b97cb4] Running
	I0602 11:06:09.540958   13525 system_pods.go:61] "coredns-64897985d-vnxnm" [87f4263e-9841-4bc8-9d4b-b54296061d0e] Running
	I0602 11:06:09.540965   13525 system_pods.go:61] "etcd-no-preload-20220602105919-2113" [c842274b-e6bc-4c21-892a-f22388d2fb25] Running
	I0602 11:06:09.540973   13525 system_pods.go:61] "kube-apiserver-no-preload-20220602105919-2113" [1b8f4bdd-c9af-47a9-a60a-fd91abef1b9d] Running
	I0602 11:06:09.540986   13525 system_pods.go:61] "kube-controller-manager-no-preload-20220602105919-2113" [36d6cfb1-3dd4-485b-9962-225a493ddb0a] Running
	I0602 11:06:09.540992   13525 system_pods.go:61] "kube-proxy-cjctl" [79eff5cb-2888-4f02-8072-f0b91b7ae18a] Running
	I0602 11:06:09.540997   13525 system_pods.go:61] "kube-scheduler-no-preload-20220602105919-2113" [b928d458-fdaa-4c7f-9440-45548309d5f6] Running
	I0602 11:06:09.541007   13525 system_pods.go:61] "metrics-server-b955d9d8-mt94g" [3ff97994-84e1-48cd-9935-128402ff47c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:06:09.541017   13525 system_pods.go:61] "storage-provisioner" [45aca05a-d370-433b-a31d-c5af9b987ae1] Running
	I0602 11:06:09.541021   13525 system_pods.go:74] duration metric: took 161.227922ms to wait for pod list to return data ...
	I0602 11:06:09.541025   13525 default_sa.go:34] waiting for default service account to be created ...
	I0602 11:06:09.733006   13525 default_sa.go:45] found service account: "default"
	I0602 11:06:09.733018   13525 default_sa.go:55] duration metric: took 191.98523ms for default service account to be created ...
	I0602 11:06:09.733024   13525 system_pods.go:116] waiting for k8s-apps to be running ...
	I0602 11:06:09.956582   13525 system_pods.go:86] 9 kube-system pods found
	I0602 11:06:09.956595   13525 system_pods.go:89] "coredns-64897985d-6m889" [1efb4b3b-2c70-4955-ae5a-1ca9c4b97cb4] Running
	I0602 11:06:09.956600   13525 system_pods.go:89] "coredns-64897985d-vnxnm" [87f4263e-9841-4bc8-9d4b-b54296061d0e] Running
	I0602 11:06:09.956603   13525 system_pods.go:89] "etcd-no-preload-20220602105919-2113" [c842274b-e6bc-4c21-892a-f22388d2fb25] Running
	I0602 11:06:09.956613   13525 system_pods.go:89] "kube-apiserver-no-preload-20220602105919-2113" [1b8f4bdd-c9af-47a9-a60a-fd91abef1b9d] Running
	I0602 11:06:09.956617   13525 system_pods.go:89] "kube-controller-manager-no-preload-20220602105919-2113" [36d6cfb1-3dd4-485b-9962-225a493ddb0a] Running
	I0602 11:06:09.956624   13525 system_pods.go:89] "kube-proxy-cjctl" [79eff5cb-2888-4f02-8072-f0b91b7ae18a] Running
	I0602 11:06:09.956628   13525 system_pods.go:89] "kube-scheduler-no-preload-20220602105919-2113" [b928d458-fdaa-4c7f-9440-45548309d5f6] Running
	I0602 11:06:09.956635   13525 system_pods.go:89] "metrics-server-b955d9d8-mt94g" [3ff97994-84e1-48cd-9935-128402ff47c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:06:09.956641   13525 system_pods.go:89] "storage-provisioner" [45aca05a-d370-433b-a31d-c5af9b987ae1] Running
	I0602 11:06:09.956645   13525 system_pods.go:126] duration metric: took 223.614368ms to wait for k8s-apps to be running ...
	I0602 11:06:09.956651   13525 system_svc.go:44] waiting for kubelet service to be running ....
	I0602 11:06:09.956697   13525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:06:09.969456   13525 system_svc.go:56] duration metric: took 12.798458ms WaitForService to wait for kubelet.
	I0602 11:06:09.969478   13525 kubeadm.go:572] duration metric: took 7.830295092s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0602 11:06:09.969495   13525 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:06:10.134370   13525 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:06:10.134383   13525 node_conditions.go:123] node cpu capacity is 6
	I0602 11:06:10.134390   13525 node_conditions.go:105] duration metric: took 164.887939ms to run NodePressure ...
	I0602 11:06:10.134397   13525 start.go:213] waiting for startup goroutines ...
	I0602 11:06:10.164528   13525 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0602 11:06:10.196610   13525 out.go:177] * Done! kubectl is now configured to use "no-preload-20220602105919-2113" cluster and "default" namespace by default
	I0602 11:06:06.668244   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058988448s)
	I0602 11:06:06.668353   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:06.668361   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:09.209694   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:09.719071   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:09.751868   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.751881   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:09.751941   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:09.782377   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.782387   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:09.782461   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:09.812852   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.812866   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:09.812927   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:09.841271   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.841287   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:09.841355   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:09.869322   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.869337   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:09.869404   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:09.904831   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.904845   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:09.904924   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:09.935441   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.935452   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:09.935513   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:09.971502   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.971513   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:09.971520   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:09.971526   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:09.984595   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:09.984608   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:12.040057   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055401538s)
	I0602 11:06:12.040168   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:12.040175   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:12.084908   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:12.084928   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:12.099657   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:12.099674   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:12.176399   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:14.677774   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:14.719277   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:14.749284   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.749296   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:14.749352   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:14.779602   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.779617   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:14.779692   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:14.810304   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.810315   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:14.810375   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:14.840825   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.840837   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:14.840895   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:14.871176   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.871189   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:14.871245   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:14.899620   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.899632   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:14.899690   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:14.928084   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.928098   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:14.928152   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:14.958074   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.958086   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:14.958093   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:14.958100   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:14.998133   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:14.998148   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:15.010030   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:15.010044   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:15.062993   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:15.063012   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:15.063020   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:15.074991   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:15.075002   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:17.150624   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.075573411s)
	I0602 11:06:19.651352   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:19.721185   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:19.753754   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.753767   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:19.753824   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:19.785309   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.785320   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:19.785375   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:19.815519   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.815532   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:19.815592   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:19.844388   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.844403   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:19.844460   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:19.874394   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.874405   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:19.874463   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:19.903563   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.903575   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:19.903636   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:19.932385   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.932397   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:19.932455   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:19.961585   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.961597   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:19.961604   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:19.961611   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:20.002244   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:20.002257   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:20.014432   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:20.014446   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:20.076253   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:20.076266   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:20.076274   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:20.088518   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:20.088530   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:22.145216   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056638404s)
	I0602 11:06:24.646167   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:24.719768   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:24.751355   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.751366   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:24.751429   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:24.782962   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.782973   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:24.783035   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:24.813990   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.814003   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:24.814058   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:24.848961   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.848974   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:24.849032   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:24.878730   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.878742   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:24.878798   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:24.906982   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.906994   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:24.907050   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:24.938955   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.938968   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:24.939036   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:24.970095   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.970109   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:24.970122   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:24.970131   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:25.015415   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:25.015429   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:25.027601   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:25.027615   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:25.079664   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:25.079676   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:25.079685   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:25.091626   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:25.091642   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:27.149516   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057826153s)
	I0602 11:06:29.650792   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:29.721431   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:29.752590   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.752602   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:29.752682   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:29.781730   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.781745   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:29.781812   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:29.811830   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.811842   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:29.811899   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:29.844830   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.844842   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:29.844906   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:29.874059   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.874074   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:29.874138   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:29.903122   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.903134   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:29.903203   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:29.931909   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.931920   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:29.931981   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:29.959768   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.959780   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:29.959787   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:29.959793   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:29.971640   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:29.971654   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:32.025610   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053903096s)
	I0602 11:06:32.025734   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:32.025742   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:32.066635   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:32.066655   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:32.078867   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:32.078880   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:32.133725   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:34.634284   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:34.721701   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:34.751984   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.751995   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:34.752050   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:34.779859   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.779872   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:34.779929   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:34.809891   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.809902   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:34.809967   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:34.838099   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.838111   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:34.838170   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:34.866657   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.866673   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:34.866736   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:34.895965   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.895980   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:34.896037   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:34.924358   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.924371   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:34.924427   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:34.954617   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.954628   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:34.954635   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:34.954646   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:34.992693   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:34.992705   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:35.005024   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:35.005041   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:35.061106   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:35.061116   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:35.061122   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:35.073095   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:35.073107   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:37.128746   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055582995s)
	I0602 11:06:39.629638   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:39.719744   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:39.751161   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.751172   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:39.751233   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:39.780249   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.780261   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:39.780319   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:39.809191   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.809204   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:39.809259   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:39.837277   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.837288   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:39.837354   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:39.865911   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.865922   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:39.865977   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:39.894428   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.894440   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:39.894508   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:39.923609   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.923621   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:39.923681   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:39.952594   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.952606   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:39.952613   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:39.952631   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:42.012619   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059940213s)
	I0602 11:06:42.012752   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:42.012763   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:42.051824   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:42.051860   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:42.064028   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:42.064044   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:42.116407   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:42.116419   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:42.116429   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:44.630691   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:44.720202   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:44.753527   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.753540   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:44.753594   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:44.783807   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.783820   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:44.783877   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:44.815087   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.815101   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:44.815157   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:44.855143   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.855157   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:44.855211   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:44.884114   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.884126   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:44.884184   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:44.912516   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.912529   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:44.912586   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:44.942078   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.942090   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:44.942144   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:44.973360   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.973371   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:44.973378   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:44.973384   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:45.013557   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:45.013572   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:45.024888   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:45.024900   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:45.077791   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:45.077807   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:45.077815   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:45.089614   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:45.089626   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:47.143631   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053956953s)
	I0602 11:06:49.645446   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:49.720524   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:49.751916   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.751928   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:49.751985   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:49.781581   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.781593   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:49.781650   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:49.811063   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.811076   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:49.811131   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:49.839799   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.839812   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:49.839870   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:49.868670   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.868683   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:49.868741   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:49.897111   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.897125   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:49.897187   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:49.926696   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.926708   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:49.926765   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:49.955084   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.955097   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:49.955103   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:49.955110   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:50.010000   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:50.010012   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:50.010021   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:50.022044   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:50.022057   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:52.079829   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057724742s)
	I0602 11:06:52.079935   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:52.079942   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:52.119564   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:52.119577   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:54.633352   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:54.721975   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:54.753327   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.753339   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:54.753394   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:54.782146   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.782158   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:54.782214   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:54.810970   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.810983   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:54.811029   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:54.842645   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.842665   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:54.842725   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:54.871490   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.871502   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:54.871556   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:54.900472   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.900483   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:54.900541   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:54.929112   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.929124   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:54.929182   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:54.958837   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.958849   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:54.958857   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:54.958866   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:54.998335   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:54.998348   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:55.009734   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:55.009746   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:55.062791   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:55.062801   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:55.062808   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:55.074548   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:55.074559   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 18:00:34 UTC, end at Thu 2022-06-02 18:07:00 UTC. --
	Jun 02 18:05:18 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:18.915623748Z" level=info msg="ignoring event" container=499ba5c94ed7b70d30dc2fbac62484edd40230d62be6140029003b5a687f5473 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:19 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:19.012948667Z" level=info msg="ignoring event" container=234bd3928f74c0496850635321019946301fada193cf1a4a6905bae65c4d72c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:29 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:29.130509153Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1c94f52b981c75d74723f113ef1d18e7faebec2ff758c6250e8ae90a7418566d
	Jun 02 18:05:29 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:29.158881900Z" level=info msg="ignoring event" container=1c94f52b981c75d74723f113ef1d18e7faebec2ff758c6250e8ae90a7418566d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.247251413Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1190fa5e12d71dfba8d50a719bce4231bac81bc59e465d65e7a839b4a4394d5d
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.300308800Z" level=info msg="ignoring event" container=1190fa5e12d71dfba8d50a719bce4231bac81bc59e465d65e7a839b4a4394d5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.420803845Z" level=info msg="ignoring event" container=7b87068dd668db059b1659af95c1ebac44d8cfec1987b392fdada3a2ec5390f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.533424308Z" level=info msg="ignoring event" container=1001748b761c246792ebe69031bd1d8cebf4555fc9152b2cbeb357bff8ff37b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.638099844Z" level=info msg="ignoring event" container=2bd7a2b0c7ef1d440825b5570bf51468e988959801e0dfbfc1acfb127a1638ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.742392954Z" level=info msg="ignoring event" container=bbc63dc9ebd9cc751b9ea1f86ccfccb4cbd79124b15a2ae19ec1167ecfcddb75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.843012042Z" level=info msg="ignoring event" container=08fd7fb9075176794027b1e9a6d0174ba97cc0d1c6c0b5760c1598518837adbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.958892225Z" level=info msg="ignoring event" container=04d6c920b3b262adc1633d6e1412ed4b2ec7c4f3b821d434d1e12bcda21e2959 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:06:04 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:04.946363688Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:06:04 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:04.946404495Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:06:04 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:04.947611594Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:06:05 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:05.811788282Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 02 18:06:10 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:10.950613307Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 02 18:06:11 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:11.173460161Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 02 18:06:13 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:13.835268383Z" level=info msg="ignoring event" container=84cae9ad8db493d57f8f26231f91497ac994ee7e5dda8e1be19f7ece4287f7c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:06:14 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:14.006788654Z" level=info msg="ignoring event" container=5e1df4966cd471a089ab1c68570784dfda5ea0e6b3c470749bf67c44256a97d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:06:14 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:14.220479470Z" level=info msg="ignoring event" container=a3d8f61758db16903b1c4bcc316978273404b6db5ee32fcaff23a6fd6eef58d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:06:15 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:15.240643394Z" level=info msg="ignoring event" container=d327b1348eba62c457259a4fca6b6aaed27896904c18fd2f08d4862fa034693c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:06:18 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:18.003479057Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:06:18 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:18.003530194Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:06:18 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:18.004980323Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	d327b1348eba6       a90209bb39e3d                                                                                    45 seconds ago       Exited              dashboard-metrics-scraper   1                   e88e8ced257dc
	d876cddda7250       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   50 seconds ago       Running             kubernetes-dashboard        0                   c9774b0435813
	aea3e9eb80c2b       6e38f40d628db                                                                                    56 seconds ago       Running             storage-provisioner         0                   244468690c915
	248c6e4cf3927       a4ca41631cc7a                                                                                    57 seconds ago       Running             coredns                     0                   34d7e0e53727b
	f63aafca1604c       4c03754524064                                                                                    57 seconds ago       Running             kube-proxy                  0                   4118ece26bcaf
	d1811367575d1       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   2a0dc1c83b358
	6c116a3954881       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   12b7326e3f111
	4bae480b79dc1       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   5434de364e1d8
	063f469ac3db3       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   395cfd8f46679
	
	* 
	* ==> coredns [248c6e4cf392] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220602105919-2113
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220602105919-2113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=no-preload-20220602105919-2113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T11_05_48_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 18:05:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220602105919-2113
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 18:06:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 18:06:58 +0000   Thu, 02 Jun 2022 18:05:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 18:06:58 +0000   Thu, 02 Jun 2022 18:05:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 18:06:58 +0000   Thu, 02 Jun 2022 18:05:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 18:06:58 +0000   Thu, 02 Jun 2022 18:06:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    no-preload-20220602105919-2113
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                535efd20-df3b-41c1-a9d6-c3f0fbb7439d
	  Boot ID:                    a475dd08-72ba-4c6d-89c1-75a58adc3783
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-6m889                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     58s
	  kube-system                 etcd-no-preload-20220602105919-2113                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         72s
	  kube-system                 kube-apiserver-no-preload-20220602105919-2113             250m (4%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-no-preload-20220602105919-2113    200m (3%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-cjctl                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-no-preload-20220602105919-2113             100m (1%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 metrics-server-b955d9d8-mt94g                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         57s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-cj9rj                0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-cd7c84bfc-mzc2x                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 56s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    78s (x4 over 78s)  kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x4 over 78s)  kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  78s (x4 over 78s)  kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasSufficientMemory
	  Normal  Starting                 72s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  72s                kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s                kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s                kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  71s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                61s                kubelet     Node no-preload-20220602105919-2113 status is now: NodeReady
	  Normal  Starting                 3s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s                 kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s                 kubelet     Node no-preload-20220602105919-2113 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2s                 kubelet     Node no-preload-20220602105919-2113 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [4bae480b79dc] <==
	* {"level":"info","ts":"2022-06-02T18:05:43.631Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-06-02T18:05:43.630Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b2c6679ac05f2cf1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2022-06-02T18:05:43.633Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-02T18:05:43.633Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-02T18:05:43.633Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:05:43.633Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:05:43.633Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:05:44.528Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:05:44.528Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:05:44.528Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:05:44.528Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:no-preload-20220602105919-2113 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T18:05:44.528Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:05:44.528Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-02T18:05:44.529Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:05:44.529Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T18:05:44.529Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T18:05:44.529Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  18:07:00 up 55 min,  0 users,  load average: 0.46, 0.74, 1.09
	Linux no-preload-20220602105919-2113 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [d1811367575d] <==
	* I0602 18:05:47.025859       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0602 18:05:47.032202       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0602 18:05:47.035386       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0602 18:05:47.035415       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0602 18:05:47.320536       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 18:05:47.344390       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 18:05:47.411487       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0602 18:05:47.415446       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0602 18:05:47.416301       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 18:05:47.418923       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0602 18:05:48.175020       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 18:05:48.624764       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 18:05:48.632739       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0602 18:05:48.640361       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0602 18:05:48.794805       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0602 18:06:01.763118       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0602 18:06:01.863216       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0602 18:06:03.613945       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.102.1.35]
	I0602 18:06:03.890710       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	W0602 18:06:04.404143       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 18:06:04.404214       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 18:06:04.404220       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0602 18:06:04.713461       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.96.171.174]
	I0602 18:06:04.780233       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.105.110.138]
	
	* 
	* ==> kube-controller-manager [063f469ac3db] <==
	* I0602 18:06:02.042783       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-vnxnm"
	I0602 18:06:03.398757       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0602 18:06:03.407241       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0602 18:06:03.472778       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0602 18:06:03.483158       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-mt94g"
	I0602 18:06:04.581467       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0602 18:06:04.587908       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0602 18:06:04.593214       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-cd7c84bfc to 1"
	E0602 18:06:04.593364       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:06:04.595339       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:06:04.603313       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0602 18:06:04.603942       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:06:04.603968       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:06:04.607829       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:06:04.607882       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:06:04.610397       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:06:04.610448       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:06:04.617943       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:06:04.618029       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0602 18:06:04.620193       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:06:04.620203       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:06:04.674840       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-cj9rj"
	I0602 18:06:04.679726       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-cd7c84bfc-mzc2x"
	E0602 18:06:57.457623       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0602 18:06:57.465408       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [f63aafca1604] <==
	* I0602 18:06:03.793941       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0602 18:06:03.794001       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0602 18:06:03.794044       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 18:06:03.887113       1 server_others.go:206] "Using iptables Proxier"
	I0602 18:06:03.887135       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 18:06:03.887142       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 18:06:03.887156       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 18:06:03.887623       1 server.go:656] "Version info" version="v1.23.6"
	I0602 18:06:03.888098       1 config.go:317] "Starting service config controller"
	I0602 18:06:03.888155       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 18:06:03.888177       1 config.go:226] "Starting endpoint slice config controller"
	I0602 18:06:03.888180       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 18:06:03.989342       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 18:06:03.989362       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [6c116a395488] <==
	* W0602 18:05:46.108182       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 18:05:46.108227       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0602 18:05:46.108306       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 18:05:46.108355       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 18:05:46.109168       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 18:05:46.109213       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 18:05:46.109417       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 18:05:46.109480       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 18:05:46.109554       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0602 18:05:46.109595       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0602 18:05:46.109417       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0602 18:05:46.109783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0602 18:05:46.925620       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 18:05:46.925661       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0602 18:05:46.970835       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0602 18:05:46.970905       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0602 18:05:47.011635       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 18:05:47.011711       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0602 18:05:47.074012       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0602 18:05:47.074049       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0602 18:05:47.090067       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0602 18:05:47.090112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 18:05:47.219201       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0602 18:05:47.219241       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0602 18:05:47.603603       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 18:00:34 UTC, end at Thu 2022-06-02 18:07:01 UTC. --
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.041851    7198 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.042165    7198 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.042231    7198 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.042283    7198 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.042364    7198 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064419    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/79eff5cb-2888-4f02-8072-f0b91b7ae18a-kube-proxy\") pod \"kube-proxy-cjctl\" (UID: \"79eff5cb-2888-4f02-8072-f0b91b7ae18a\") " pod="kube-system/kube-proxy-cjctl"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064475    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3ff97994-84e1-48cd-9935-128402ff47c0-tmp-dir\") pod \"metrics-server-b955d9d8-mt94g\" (UID: \"3ff97994-84e1-48cd-9935-128402ff47c0\") " pod="kube-system/metrics-server-b955d9d8-mt94g"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064498    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1efb4b3b-2c70-4955-ae5a-1ca9c4b97cb4-config-volume\") pod \"coredns-64897985d-6m889\" (UID: \"1efb4b3b-2c70-4955-ae5a-1ca9c4b97cb4\") " pod="kube-system/coredns-64897985d-6m889"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064547    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/937d38bc-b2d7-4a95-ad97-cb199dfd5ef8-tmp-volume\") pod \"kubernetes-dashboard-cd7c84bfc-mzc2x\" (UID: \"937d38bc-b2d7-4a95-ad97-cb199dfd5ef8\") " pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-mzc2x"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064565    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/45aca05a-d370-433b-a31d-c5af9b987ae1-tmp\") pod \"storage-provisioner\" (UID: \"45aca05a-d370-433b-a31d-c5af9b987ae1\") " pod="kube-system/storage-provisioner"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064612    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrd47\" (UniqueName: \"kubernetes.io/projected/3ff97994-84e1-48cd-9935-128402ff47c0-kube-api-access-xrd47\") pod \"metrics-server-b955d9d8-mt94g\" (UID: \"3ff97994-84e1-48cd-9935-128402ff47c0\") " pod="kube-system/metrics-server-b955d9d8-mt94g"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064667    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7kc5\" (UniqueName: \"kubernetes.io/projected/42655025-9d8f-4b9d-9b4f-e57da0c9771b-kube-api-access-r7kc5\") pod \"dashboard-metrics-scraper-56974995fc-cj9rj\" (UID: \"42655025-9d8f-4b9d-9b4f-e57da0c9771b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-cj9rj"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064698    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kvgh\" (UniqueName: \"kubernetes.io/projected/45aca05a-d370-433b-a31d-c5af9b987ae1-kube-api-access-4kvgh\") pod \"storage-provisioner\" (UID: \"45aca05a-d370-433b-a31d-c5af9b987ae1\") " pod="kube-system/storage-provisioner"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064718    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79eff5cb-2888-4f02-8072-f0b91b7ae18a-xtables-lock\") pod \"kube-proxy-cjctl\" (UID: \"79eff5cb-2888-4f02-8072-f0b91b7ae18a\") " pod="kube-system/kube-proxy-cjctl"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064744    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k25l5\" (UniqueName: \"kubernetes.io/projected/79eff5cb-2888-4f02-8072-f0b91b7ae18a-kube-api-access-k25l5\") pod \"kube-proxy-cjctl\" (UID: \"79eff5cb-2888-4f02-8072-f0b91b7ae18a\") " pod="kube-system/kube-proxy-cjctl"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064796    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsdft\" (UniqueName: \"kubernetes.io/projected/1efb4b3b-2c70-4955-ae5a-1ca9c4b97cb4-kube-api-access-jsdft\") pod \"coredns-64897985d-6m889\" (UID: \"1efb4b3b-2c70-4955-ae5a-1ca9c4b97cb4\") " pod="kube-system/coredns-64897985d-6m889"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064818    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/42655025-9d8f-4b9d-9b4f-e57da0c9771b-tmp-volume\") pod \"dashboard-metrics-scraper-56974995fc-cj9rj\" (UID: \"42655025-9d8f-4b9d-9b4f-e57da0c9771b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-cj9rj"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064834    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw6wj\" (UniqueName: \"kubernetes.io/projected/937d38bc-b2d7-4a95-ad97-cb199dfd5ef8-kube-api-access-jw6wj\") pod \"kubernetes-dashboard-cd7c84bfc-mzc2x\" (UID: \"937d38bc-b2d7-4a95-ad97-cb199dfd5ef8\") " pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-mzc2x"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064937    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79eff5cb-2888-4f02-8072-f0b91b7ae18a-lib-modules\") pod \"kube-proxy-cjctl\" (UID: \"79eff5cb-2888-4f02-8072-f0b91b7ae18a\") " pod="kube-system/kube-proxy-cjctl"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064953    7198 reconciler.go:157] "Reconciler: start to sync state"
	Jun 02 18:07:00 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:07:00.238186    7198 request.go:665] Waited for 1.191596336s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 02 18:07:00 no-preload-20220602105919-2113 kubelet[7198]: E0602 18:07:00.336436    7198 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220602105919-2113\" already exists" pod="kube-system/kube-scheduler-no-preload-20220602105919-2113"
	Jun 02 18:07:00 no-preload-20220602105919-2113 kubelet[7198]: E0602 18:07:00.454280    7198 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220602105919-2113\" already exists" pod="kube-system/etcd-no-preload-20220602105919-2113"
	Jun 02 18:07:00 no-preload-20220602105919-2113 kubelet[7198]: E0602 18:07:00.720612    7198 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220602105919-2113\" already exists" pod="kube-system/kube-apiserver-no-preload-20220602105919-2113"
	Jun 02 18:07:01 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:07:01.143684    7198 scope.go:110] "RemoveContainer" containerID="d327b1348eba62c457259a4fca6b6aaed27896904c18fd2f08d4862fa034693c"
	
	* 
	* ==> kubernetes-dashboard [d876cddda725] <==
	* 2022/06/02 18:06:10 Using namespace: kubernetes-dashboard
	2022/06/02 18:06:10 Using in-cluster config to connect to apiserver
	2022/06/02 18:06:10 Using secret token for csrf signing
	2022/06/02 18:06:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/02 18:06:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/02 18:06:10 Successful initial request to the apiserver, version: v1.23.6
	2022/06/02 18:06:10 Generating JWE encryption key
	2022/06/02 18:06:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/02 18:06:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/02 18:06:10 Initializing JWE encryption key from synchronized object
	2022/06/02 18:06:10 Creating in-cluster Sidecar client
	2022/06/02 18:06:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/02 18:06:10 Serving insecurely on HTTP port: 9090
	2022/06/02 18:06:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/02 18:06:10 Starting overwatch
	
	* 
	* ==> storage-provisioner [aea3e9eb80c2] <==
	* I0602 18:06:04.470354       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 18:06:04.480832       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 18:06:04.480890       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 18:06:04.487145       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 18:06:04.487293       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220602105919-2113_b6594a44-c79c-4c71-a29a-ea67307901dd!
	I0602 18:06:04.487919       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d815899-92a0-47d9-b0da-6cf8c36f4375", APIVersion:"v1", ResourceVersion:"507", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220602105919-2113_b6594a44-c79c-4c71-a29a-ea67307901dd became leader
	I0602 18:06:04.589740       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220602105919-2113_b6594a44-c79c-4c71-a29a-ea67307901dd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220602105919-2113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-mt94g
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220602105919-2113 describe pod metrics-server-b955d9d8-mt94g
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220602105919-2113 describe pod metrics-server-b955d9d8-mt94g: exit status 1 (264.136606ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-mt94g" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220602105919-2113 describe pod metrics-server-b955d9d8-mt94g: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220602105919-2113
helpers_test.go:235: (dbg) docker inspect no-preload-20220602105919-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0f302b8f5ed4f65dea1d8d45928555b315a56de9771f21132d6426321d5903b5",
	        "Created": "2022-06-02T17:59:21.591842343Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196542,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T18:00:34.496508242Z",
	            "FinishedAt": "2022-06-02T18:00:32.564133952Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/0f302b8f5ed4f65dea1d8d45928555b315a56de9771f21132d6426321d5903b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f302b8f5ed4f65dea1d8d45928555b315a56de9771f21132d6426321d5903b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f302b8f5ed4f65dea1d8d45928555b315a56de9771f21132d6426321d5903b5/hosts",
	        "LogPath": "/var/lib/docker/containers/0f302b8f5ed4f65dea1d8d45928555b315a56de9771f21132d6426321d5903b5/0f302b8f5ed4f65dea1d8d45928555b315a56de9771f21132d6426321d5903b5-json.log",
	        "Name": "/no-preload-20220602105919-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220602105919-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220602105919-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/287ef2386cddf654c60473953eb4d199390f0f320ba1a6f14be9176b6967da70-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/287ef2386cddf654c60473953eb4d199390f0f320ba1a6f14be9176b6967da70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/287ef2386cddf654c60473953eb4d199390f0f320ba1a6f14be9176b6967da70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/287ef2386cddf654c60473953eb4d199390f0f320ba1a6f14be9176b6967da70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220602105919-2113",
	                "Source": "/var/lib/docker/volumes/no-preload-20220602105919-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220602105919-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220602105919-2113",
	                "name.minikube.sigs.k8s.io": "no-preload-20220602105919-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f729207e5b0076152390f1cd3165dbbba90e1ecf3b17b940305e6db29b07c08c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51942"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51938"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51939"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51940"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51941"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f729207e5b00",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220602105919-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0f302b8f5ed4",
	                        "no-preload-20220602105919-2113"
	                    ],
	                    "NetworkID": "3c2378c45217e2c7578c492e90a28ef9e5cb0fc6dddada1c4f0cd94c3a99251d",
	                    "EndpointID": "bb93dd616d4cfa67b4ae9a6048cb6ef3c7afd76b858628906e4cf479d491f696",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220602105919-2113 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220602105919-2113 logs -n 25: (2.808539022s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                  Profile                  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p false-20220602104455-2113                      | false-20220602104455-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:56 PDT | 02 Jun 22 10:56 PDT |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                           |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=false                     |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p false-20220602104455-2113                      | false-20220602104455-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:56 PDT | 02 Jun 22 10:56 PDT |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p false-20220602104455-2113                      | false-20220602104455-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:57 PDT | 02 Jun 22 10:57 PDT |
	| start   | -p bridge-20220602104455-2113                     | bridge-20220602104455-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:57 PDT | 02 Jun 22 10:57 PDT |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                           |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p bridge-20220602104455-2113                     | bridge-20220602104455-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:57 PDT | 02 Jun 22 10:57 PDT |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p bridge-20220602104455-2113                     | bridge-20220602104455-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:58 PDT | 02 Jun 22 10:58 PDT |
	| delete  | -p cilium-20220602104456-2113                     | cilium-20220602104456-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:58 PDT | 02 Jun 22 10:58 PDT |
	| start   | -p                                                | enable-default-cni-20220602104455-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:58 PDT | 02 Jun 22 10:58 PDT |
	|         | enable-default-cni-20220602104455-2113            |                                           |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220602104455-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:58 PDT | 02 Jun 22 10:58 PDT |
	|         | enable-default-cni-20220602104455-2113            |                                           |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| start   | -p kubenet-20220602104455-2113                    | kubenet-20220602104455-2113               | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:58 PDT | 02 Jun 22 10:59 PDT |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --network-plugin=kubenet                          |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p kubenet-20220602104455-2113                    | kubenet-20220602104455-2113               | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220602104455-2113    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	|         | enable-default-cni-20220602104455-2113            |                                           |         |                |                     |                     |
	| delete  | -p kubenet-20220602104455-2113                    | kubenet-20220602104455-2113               | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	| delete  | -p                                                | disable-driver-mounts-20220602105918-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	|         | disable-driver-mounts-20220602105918-2113         |                                           |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | --memory=2200                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220602105906-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:04 PDT | 02 Jun 22 11:04 PDT |
	|         | old-k8s-version-20220602105906-2113               |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220602105906-2113       | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:04 PDT | 02 Jun 22 11:04 PDT |
	|         | old-k8s-version-20220602105906-2113               |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | --memory=2200                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                           |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| logs    | no-preload-20220602105919-2113                    | no-preload-20220602105919-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:07 PDT |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 11:04:50
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 11:04:50.212912   13778 out.go:296] Setting OutFile to fd 1 ...
	I0602 11:04:50.213271   13778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:04:50.213277   13778 out.go:309] Setting ErrFile to fd 2...
	I0602 11:04:50.213283   13778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:04:50.213377   13778 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 11:04:50.213641   13778 out.go:303] Setting JSON to false
	I0602 11:04:50.229375   13778 start.go:115] hostinfo: {"hostname":"37309.local","uptime":3859,"bootTime":1654189231,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 11:04:50.229480   13778 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 11:04:50.251550   13778 out.go:177] * [old-k8s-version-20220602105906-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 11:04:50.294147   13778 notify.go:193] Checking for updates...
	I0602 11:04:50.315087   13778 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 11:04:50.336129   13778 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:04:50.357034   13778 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 11:04:50.399144   13778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 11:04:50.420008   13778 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 11:04:50.457779   13778 config.go:178] Loaded profile config "old-k8s-version-20220602105906-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0602 11:04:50.480033   13778 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0602 11:04:50.516984   13778 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 11:04:50.590398   13778 docker.go:137] docker version: linux-20.10.14
	I0602 11:04:50.590521   13778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:04:50.717181   13778 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:04:50.66469354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:04:50.739075   13778 out.go:177] * Using the docker driver based on existing profile
	I0602 11:04:50.759620   13778 start.go:284] selected driver: docker
	I0602 11:04:50.759645   13778 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220602105906-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mul
tiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:04:50.759795   13778 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 11:04:50.763139   13778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:04:50.890034   13778 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:04:50.837983116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:04:50.890213   13778 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 11:04:50.890234   13778 cni.go:95] Creating CNI manager for ""
	I0602 11:04:50.890242   13778 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:04:50.890251   13778 start_flags.go:306] config:
	{Name:old-k8s-version-20220602105906-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDom
ain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountSt
ring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:04:50.932780   13778 out.go:177] * Starting control plane node old-k8s-version-20220602105906-2113 in cluster old-k8s-version-20220602105906-2113
	I0602 11:04:50.953659   13778 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 11:04:50.974798   13778 out.go:177] * Pulling base image ...
	I0602 11:04:51.016700   13778 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 11:04:51.016726   13778 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 11:04:51.016784   13778 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0602 11:04:51.016813   13778 cache.go:57] Caching tarball of preloaded images
	I0602 11:04:51.016994   13778 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 11:04:51.017034   13778 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0602 11:04:51.017938   13778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/config.json ...
	I0602 11:04:51.082281   13778 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 11:04:51.082299   13778 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 11:04:51.082308   13778 cache.go:206] Successfully downloaded all kic artifacts
	I0602 11:04:51.082351   13778 start.go:352] acquiring machines lock for old-k8s-version-20220602105906-2113: {Name:mk7f6a3ed7e2845a9fdc2d9a313dfa02067477c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 11:04:51.082434   13778 start.go:356] acquired machines lock for "old-k8s-version-20220602105906-2113" in 59.982µs
	I0602 11:04:51.082454   13778 start.go:94] Skipping create...Using existing machine configuration
	I0602 11:04:51.082463   13778 fix.go:55] fixHost starting: 
	I0602 11:04:51.082690   13778 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220602105906-2113 --format={{.State.Status}}
	I0602 11:04:51.150104   13778 fix.go:103] recreateIfNeeded on old-k8s-version-20220602105906-2113: state=Stopped err=<nil>
	W0602 11:04:51.150141   13778 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 11:04:51.171923   13778 out.go:177] * Restarting existing docker container for "old-k8s-version-20220602105906-2113" ...
	I0602 11:04:48.488292   13525 pod_ready.go:102] pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace has status "Ready":"False"
	I0602 11:04:50.518933   13525 pod_ready.go:102] pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace has status "Ready":"False"
	I0602 11:04:52.988519   13525 pod_ready.go:102] pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace has status "Ready":"False"
	I0602 11:04:51.192766   13778 cli_runner.go:164] Run: docker start old-k8s-version-20220602105906-2113
	I0602 11:04:51.562681   13778 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220602105906-2113 --format={{.State.Status}}
	I0602 11:04:51.662883   13778 kic.go:416] container "old-k8s-version-20220602105906-2113" state is running.
	I0602 11:04:51.663452   13778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220602105906-2113
	I0602 11:04:51.737105   13778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/config.json ...
	I0602 11:04:51.737509   13778 machine.go:88] provisioning docker machine ...
	I0602 11:04:51.737549   13778 ubuntu.go:169] provisioning hostname "old-k8s-version-20220602105906-2113"
	I0602 11:04:51.737658   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:51.809463   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:51.809681   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:51.809694   13778 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220602105906-2113 && echo "old-k8s-version-20220602105906-2113" | sudo tee /etc/hostname
	I0602 11:04:51.932527   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220602105906-2113
	
	I0602 11:04:51.932606   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.004974   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:52.005104   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:52.005119   13778 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220602105906-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220602105906-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220602105906-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 11:04:52.121395   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:04:52.121423   13778 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 11:04:52.121455   13778 ubuntu.go:177] setting up certificates
	I0602 11:04:52.121472   13778 provision.go:83] configureAuth start
	I0602 11:04:52.121550   13778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220602105906-2113
	I0602 11:04:52.192336   13778 provision.go:138] copyHostCerts
	I0602 11:04:52.192420   13778 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 11:04:52.192429   13778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 11:04:52.192520   13778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 11:04:52.192739   13778 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 11:04:52.192752   13778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 11:04:52.192807   13778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 11:04:52.192939   13778 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 11:04:52.192945   13778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 11:04:52.192998   13778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 11:04:52.193133   13778 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220602105906-2113 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220602105906-2113]
	I0602 11:04:52.320731   13778 provision.go:172] copyRemoteCerts
	I0602 11:04:52.320787   13778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 11:04:52.320827   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.392403   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:52.478826   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0602 11:04:52.497596   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0602 11:04:52.514656   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 11:04:52.533451   13778 provision.go:86] duration metric: configureAuth took 411.958536ms
	I0602 11:04:52.533463   13778 ubuntu.go:193] setting minikube options for container-runtime
	I0602 11:04:52.533626   13778 config.go:178] Loaded profile config "old-k8s-version-20220602105906-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0602 11:04:52.533686   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.603829   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:52.604076   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:52.604123   13778 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 11:04:52.720513   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 11:04:52.720529   13778 ubuntu.go:71] root file system type: overlay
	I0602 11:04:52.720687   13778 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 11:04:52.720759   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.791816   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:52.791987   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:52.792042   13778 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 11:04:52.916537   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 11:04:52.916616   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:52.986921   13778 main.go:134] libmachine: Using SSH client type: native
	I0602 11:04:52.987077   13778 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52182 <nil> <nil>}
	I0602 11:04:52.987090   13778 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 11:04:53.105706   13778 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:04:53.105744   13778 machine.go:91] provisioned docker machine in 1.368201682s
	I0602 11:04:53.105753   13778 start.go:306] post-start starting for "old-k8s-version-20220602105906-2113" (driver="docker")
	I0602 11:04:53.105759   13778 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 11:04:53.105828   13778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 11:04:53.105878   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:53.176368   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:53.262898   13778 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 11:04:53.266671   13778 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 11:04:53.266685   13778 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 11:04:53.266692   13778 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 11:04:53.266697   13778 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 11:04:53.266705   13778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 11:04:53.266812   13778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 11:04:53.266949   13778 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 11:04:53.267114   13778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 11:04:53.274148   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:04:53.291722   13778 start.go:309] post-start completed in 185.950644ms
	I0602 11:04:53.291805   13778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 11:04:53.291855   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:53.362608   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:53.445871   13778 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 11:04:53.450173   13778 fix.go:57] fixHost completed within 2.367659825s
	I0602 11:04:53.450189   13778 start.go:81] releasing machines lock for "old-k8s-version-20220602105906-2113", held for 2.367704829s
	I0602 11:04:53.450271   13778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220602105906-2113
	I0602 11:04:53.521262   13778 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 11:04:53.521302   13778 ssh_runner.go:195] Run: systemctl --version
	I0602 11:04:53.521350   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:53.521351   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:53.597060   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:53.598923   13778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52182 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/old-k8s-version-20220602105906-2113/id_rsa Username:docker}
	I0602 11:04:53.810081   13778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 11:04:53.822458   13778 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:04:53.832182   13778 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 11:04:53.832234   13778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 11:04:53.841612   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 11:04:53.854258   13778 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 11:04:53.920122   13778 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 11:04:53.988970   13778 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:04:53.999075   13778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 11:04:54.067007   13778 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 11:04:54.076634   13778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:04:54.111937   13778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:04:54.188721   13778 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0602 11:04:54.188859   13778 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220602105906-2113 dig +short host.docker.internal
	I0602 11:04:54.320880   13778 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 11:04:54.320996   13778 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 11:04:54.325104   13778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:04:54.334818   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:54.405836   13778 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 11:04:54.405911   13778 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:04:54.436193   13778 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0602 11:04:54.436207   13778 docker.go:541] Images already preloaded, skipping extraction
	I0602 11:04:54.436280   13778 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:04:54.467205   13778 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0602 11:04:54.467227   13778 cache_images.go:84] Images are preloaded, skipping loading
	I0602 11:04:54.467299   13778 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 11:04:54.542038   13778 cni.go:95] Creating CNI manager for ""
	I0602 11:04:54.542049   13778 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:04:54.542067   13778 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 11:04:54.542080   13778 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220602105906-2113 NodeName:old-k8s-version-20220602105906-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 11:04:54.542186   13778 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220602105906-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220602105906-2113
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 11:04:54.542264   13778 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220602105906-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 11:04:54.542338   13778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0602 11:04:54.550328   13778 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 11:04:54.550378   13778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 11:04:54.557754   13778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0602 11:04:54.570217   13778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 11:04:54.583212   13778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2146 bytes)
	I0602 11:04:54.595430   13778 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0602 11:04:54.598973   13778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:04:54.608290   13778 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113 for IP: 192.168.49.2
	I0602 11:04:54.608396   13778 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 11:04:54.608444   13778 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 11:04:54.608525   13778 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/client.key
	I0602 11:04:54.608588   13778 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.key.dd3b5fb2
	I0602 11:04:54.608636   13778 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.key
	I0602 11:04:54.608843   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 11:04:54.608888   13778 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 11:04:54.608900   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 11:04:54.608937   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 11:04:54.608966   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 11:04:54.608997   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 11:04:54.609062   13778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:04:54.609636   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 11:04:54.626606   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 11:04:54.643214   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 11:04:54.660634   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602105906-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 11:04:54.678739   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 11:04:54.701311   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 11:04:54.718932   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 11:04:54.736064   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 11:04:54.752603   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 11:04:54.771409   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 11:04:54.788319   13778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 11:04:54.805672   13778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 11:04:54.819496   13778 ssh_runner.go:195] Run: openssl version
	I0602 11:04:54.825123   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 11:04:54.832756   13778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 11:04:54.836487   13778 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 11:04:54.836529   13778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 11:04:54.841628   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
	I0602 11:04:54.848799   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 11:04:54.856314   13778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 11:04:54.860364   13778 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 11:04:54.860406   13778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 11:04:54.865383   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 11:04:54.873566   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 11:04:54.881515   13778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:04:54.885348   13778 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:04:54.885384   13778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:04:54.890326   13778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 11:04:54.897388   13778 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220602105906-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220602105906-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:04:54.897507   13778 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:04:54.926275   13778 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 11:04:54.933771   13778 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 11:04:54.933784   13778 kubeadm.go:626] restartCluster start
	I0602 11:04:54.933827   13778 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 11:04:54.941071   13778 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:54.941133   13778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220602105906-2113
	I0602 11:04:55.012069   13778 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220602105906-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:04:55.012243   13778 kubeconfig.go:127] "old-k8s-version-20220602105906-2113" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig - will repair!
	I0602 11:04:55.012551   13778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:04:55.013835   13778 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 11:04:55.021171   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.021223   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.029814   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:54.990253   13525 pod_ready.go:102] pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace has status "Ready":"False"
	I0602 11:04:57.490672   13525 pod_ready.go:102] pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace has status "Ready":"False"
	I0602 11:04:55.230022   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.239655   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.250586   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:55.430691   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.430839   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.443438   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:55.629918   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.630056   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.642922   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:55.830069   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:55.830146   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:55.839562   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.029929   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.030041   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.040636   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.230080   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.230187   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.240520   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.430805   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.430932   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.442009   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.630654   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.630783   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.641383   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:56.832024   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:56.832186   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:56.843733   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.030158   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.030295   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.040942   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.230556   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.230665   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.240962   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.430085   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.430185   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.440845   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.632018   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.632152   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.642712   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:57.832058   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:57.832177   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:57.842760   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:58.031624   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:58.031750   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:58.041861   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:58.041871   13778 api_server.go:165] Checking apiserver status ...
	I0602 11:04:58.041914   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:04:58.050439   13778 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:04:58.050451   13778 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0602 11:04:58.050460   13778 kubeadm.go:1092] stopping kube-system containers ...
	I0602 11:04:58.050517   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:04:58.078781   13778 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 11:04:58.088953   13778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:04:58.096401   13778 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5743 Jun  2 18:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5779 Jun  2 18:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5923 Jun  2 18:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5731 Jun  2 18:01 /etc/kubernetes/scheduler.conf
	
	I0602 11:04:58.096451   13778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0602 11:04:58.104096   13778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0602 11:04:58.111337   13778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0602 11:04:58.118781   13778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0602 11:04:58.125918   13778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:04:58.133559   13778 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 11:04:58.133572   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:04:58.183775   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:04:58.896537   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:04:59.102587   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:04:59.155939   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:04:59.209147   13778 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:04:59.209209   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:04:59.720023   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:04:59.491633   13525 pod_ready.go:102] pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace has status "Ready":"False"
	I0602 11:05:01.991937   13525 pod_ready.go:102] pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace has status "Ready":"False"
	I0602 11:05:02.483619   13525 pod_ready.go:81] duration metric: took 4m0.00409725s waiting for pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace to be "Ready" ...
	E0602 11:05:02.483644   13525 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-gtr88" in "kube-system" namespace to be "Ready" (will not retry!)
	I0602 11:05:02.483698   13525 pod_ready.go:38] duration metric: took 4m15.053210658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:05:02.483739   13525 kubeadm.go:630] restartCluster took 4m24.531157256s
	W0602 11:05:02.483854   13525 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0602 11:05:02.483882   13525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:05:00.218628   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:00.717988   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:01.217869   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:01.720091   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:02.218009   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:02.720026   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:03.218005   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:03.719549   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:04.218269   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:04.720072   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:05.218193   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:05.719036   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:06.218362   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:06.718089   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:07.218191   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:07.720187   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:08.218889   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:08.720174   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:09.218254   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:09.718308   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:10.218927   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:10.718179   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:11.218634   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:11.720188   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:12.218325   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:12.718650   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:13.219209   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:13.718177   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:14.220264   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:14.720245   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:15.218876   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:15.718362   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:16.218196   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:16.720267   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:17.218738   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:17.720295   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:18.219639   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:18.719893   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:19.220307   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:19.718573   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:20.218881   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:20.718810   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:21.218435   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:21.720434   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:22.218420   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:22.720437   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:23.218341   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:23.718492   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:24.219768   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:24.718807   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:25.218974   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:25.720400   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:26.218669   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:26.720487   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:27.220515   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:27.720403   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:28.218730   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:28.718903   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:29.218531   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:29.720002   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:30.219067   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:30.720510   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:31.219729   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:31.720605   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:32.218956   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:32.720577   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:33.218952   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:33.720422   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:34.219560   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:34.718658   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:35.219547   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:35.720592   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:36.219099   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:36.719579   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:37.220649   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:37.718593   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:38.219903   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:38.719838   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:39.219406   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:39.718563   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:40.811222   13525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.326662804s)
	I0602 11:05:40.811281   13525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:05:40.821130   13525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:05:40.829186   13525 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:05:40.829233   13525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:05:40.836939   13525 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:05:40.836966   13525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:05:41.324890   13525 out.go:204]   - Generating certificates and keys ...
	I0602 11:05:42.371876   13525 out.go:204]   - Booting up control plane ...
	I0602 11:05:40.218840   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:40.718801   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:41.218646   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:41.720566   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:42.220521   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:42.718687   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:43.218743   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:43.719443   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:44.218763   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:44.718717   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:45.219727   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:45.719434   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:46.218669   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:46.719292   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:47.218839   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:47.720682   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:48.219900   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:48.718703   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:49.218731   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:49.718948   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:48.417839   13525 out.go:204]   - Configuring RBAC rules ...
	I0602 11:05:48.793207   13525 cni.go:95] Creating CNI manager for ""
	I0602 11:05:48.793218   13525 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:05:48.793246   13525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 11:05:48.793326   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=no-preload-20220602105919-2113 minikube.k8s.io/updated_at=2022_06_02T11_05_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:48.793330   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:48.802337   13525 ops.go:34] apiserver oom_adj: -16
	I0602 11:05:48.982807   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:49.547906   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:50.046418   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:50.547906   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:51.047017   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:51.546082   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:52.046799   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:52.545914   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:53.047211   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:50.219516   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:50.718836   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:51.218950   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:51.719045   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:52.220332   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:52.719306   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:53.219458   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:53.719131   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:54.219966   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:54.718927   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:53.546077   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:54.047664   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:54.545968   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:55.047870   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:55.546564   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:56.046044   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:56.546214   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:57.047968   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:57.546059   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:58.048016   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:55.219031   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:55.718981   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:56.220088   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:56.718966   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:57.219844   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:57.718981   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:58.221005   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:58.719195   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:05:59.220136   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:05:59.250806   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.250818   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:05:59.250893   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:05:59.280792   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.280803   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:05:59.280863   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:05:59.308900   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.308911   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:05:59.308972   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:05:59.337622   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.337634   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:05:59.337694   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:05:59.368293   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.368306   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:05:59.368364   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:05:59.396426   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.396439   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:05:59.396499   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:05:59.425726   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.425739   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:05:59.425795   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:05:59.454519   13778 logs.go:274] 0 containers: []
	W0602 11:05:59.454531   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:05:59.454538   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:05:59.454547   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:05:59.466217   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:05:59.466232   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:05:59.517449   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:05:59.517462   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:05:59.517469   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:05:59.530200   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:05:59.530214   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:05:58.547423   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:59.046038   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:05:59.546458   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:06:00.046043   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:06:00.546396   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:06:01.046495   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:06:01.548087   13525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:06:01.621971   13525 kubeadm.go:1045] duration metric: took 12.828480532s to wait for elevateKubeSystemPrivileges.
	I0602 11:06:01.621988   13525 kubeadm.go:397] StartCluster complete in 5m23.705021941s
	I0602 11:06:01.622011   13525 settings.go:142] acquiring lock: {Name:mka48fc2cc9e132f8df9370d54d7f09abdd5d2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:06:01.622099   13525 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:06:01.622731   13525 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:06:02.138987   13525 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220602105919-2113" rescaled to 1
	I0602 11:06:02.139031   13525 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 11:06:02.139040   13525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 11:06:02.139087   13525 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0602 11:06:02.161332   13525 out.go:177] * Verifying Kubernetes components...
	I0602 11:06:02.139277   13525 config.go:178] Loaded profile config "no-preload-20220602105919-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:06:02.161401   13525 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220602105919-2113"
	I0602 11:06:02.161401   13525 addons.go:65] Setting dashboard=true in profile "no-preload-20220602105919-2113"
	I0602 11:06:02.161402   13525 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220602105919-2113"
	I0602 11:06:02.161434   13525 addons.go:65] Setting metrics-server=true in profile "no-preload-20220602105919-2113"
	I0602 11:06:02.234564   13525 addons.go:153] Setting addon metrics-server=true in "no-preload-20220602105919-2113"
	I0602 11:06:02.234576   13525 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220602105919-2113"
	W0602 11:06:02.234584   13525 addons.go:165] addon metrics-server should already be in state true
	I0602 11:06:02.234586   13525 addons.go:153] Setting addon dashboard=true in "no-preload-20220602105919-2113"
	I0602 11:06:02.234620   13525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220602105919-2113"
	W0602 11:06:02.234632   13525 addons.go:165] addon dashboard should already be in state true
	I0602 11:06:02.234641   13525 host.go:66] Checking if "no-preload-20220602105919-2113" exists ...
	W0602 11:06:02.234595   13525 addons.go:165] addon storage-provisioner should already be in state true
	I0602 11:06:02.234674   13525 host.go:66] Checking if "no-preload-20220602105919-2113" exists ...
	I0602 11:06:02.234676   13525 host.go:66] Checking if "no-preload-20220602105919-2113" exists ...
	I0602 11:06:02.234601   13525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:06:02.235069   13525 cli_runner.go:164] Run: docker container inspect no-preload-20220602105919-2113 --format={{.State.Status}}
	I0602 11:06:02.235135   13525 cli_runner.go:164] Run: docker container inspect no-preload-20220602105919-2113 --format={{.State.Status}}
	I0602 11:06:02.235161   13525 cli_runner.go:164] Run: docker container inspect no-preload-20220602105919-2113 --format={{.State.Status}}
	I0602 11:06:02.235952   13525 cli_runner.go:164] Run: docker container inspect no-preload-20220602105919-2113 --format={{.State.Status}}
	I0602 11:06:02.243739   13525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 11:06:02.255736   13525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220602105919-2113
	I0602 11:06:02.387806   13525 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0602 11:06:02.348441   13525 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220602105919-2113"
	I0602 11:06:02.366429   13525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0602 11:06:02.387854   13525 addons.go:165] addon default-storageclass should already be in state true
	I0602 11:06:02.450797   13525 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0602 11:06:02.409034   13525 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:06:02.429992   13525 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0602 11:06:02.429998   13525 host.go:66] Checking if "no-preload-20220602105919-2113" exists ...
	I0602 11:06:02.442672   13525 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220602105919-2113" to be "Ready" ...
	I0602 11:06:02.471926   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 11:06:02.471926   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0602 11:06:02.492894   13525 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0602 11:06:02.472009   13525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220602105919-2113
	I0602 11:06:02.472018   13525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220602105919-2113
	I0602 11:06:02.472515   13525 cli_runner.go:164] Run: docker container inspect no-preload-20220602105919-2113 --format={{.State.Status}}
	I0602 11:06:02.476345   13525 node_ready.go:49] node "no-preload-20220602105919-2113" has status "Ready":"True"
	I0602 11:06:02.513957   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0602 11:06:02.513974   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0602 11:06:02.513959   13525 node_ready.go:38] duration metric: took 42.017221ms waiting for node "no-preload-20220602105919-2113" to be "Ready" ...
	I0602 11:06:02.513998   13525 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:06:02.514072   13525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220602105919-2113
	I0602 11:06:02.521948   13525 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-6m889" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:02.575900   13525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51942 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/no-preload-20220602105919-2113/id_rsa Username:docker}
	I0602 11:06:02.609147   13525 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 11:06:02.609164   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 11:06:02.609242   13525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220602105919-2113
	I0602 11:06:02.611033   13525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51942 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/no-preload-20220602105919-2113/id_rsa Username:docker}
	I0602 11:06:02.612643   13525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51942 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/no-preload-20220602105919-2113/id_rsa Username:docker}
	I0602 11:06:02.685895   13525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51942 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/no-preload-20220602105919-2113/id_rsa Username:docker}
	I0602 11:06:02.695658   13525 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0602 11:06:02.695670   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0602 11:06:02.714463   13525 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0602 11:06:02.714475   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0602 11:06:02.779112   13525 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:06:02.779128   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0602 11:06:02.785059   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0602 11:06:02.785076   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0602 11:06:02.794962   13525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:06:02.800989   13525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:06:02.807170   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0602 11:06:02.807188   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0602 11:06:02.893079   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0602 11:06:02.893100   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0602 11:06:02.974206   13525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 11:06:03.086155   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0602 11:06:03.086169   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0602 11:06:03.194757   13525 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0602 11:06:03.210171   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0602 11:06:03.210183   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0602 11:06:03.375843   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0602 11:06:03.375859   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0602 11:06:03.572646   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0602 11:06:03.572663   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0602 11:06:03.609224   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0602 11:06:03.609244   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0602 11:06:03.680445   13525 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220602105919-2113"
	I0602 11:06:03.689077   13525 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:06:03.689093   13525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0602 11:06:03.772979   13525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:06:04.535326   13525 pod_ready.go:102] pod "coredns-64897985d-6m889" in "kube-system" namespace has status "Ready":"False"
	I0602 11:06:04.785766   13525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.012719524s)
	I0602 11:06:04.809002   13525 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0602 11:06:01.585281   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055020437s)
	I0602 11:06:01.585394   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:01.585402   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:04.133605   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:04.221004   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:04.251619   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.251631   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:04.251691   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:04.292078   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.292092   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:04.292154   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:04.339824   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.339842   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:04.339915   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:04.377243   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.377271   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:04.377353   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:04.408245   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.408257   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:04.408326   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:04.441761   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.441772   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:04.441834   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:04.471465   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.471482   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:04.471551   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:04.507089   13778 logs.go:274] 0 containers: []
	W0602 11:06:04.507101   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:04.507107   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:04.507115   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:04.522059   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:04.522082   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:04.592918   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:04.592943   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:04.592954   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:04.609191   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:04.609209   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:04.882616   13525 addons.go:417] enableAddons completed in 2.743486245s
	I0602 11:06:07.035944   13525 pod_ready.go:102] pod "coredns-64897985d-6m889" in "kube-system" namespace has status "Ready":"False"
	I0602 11:06:08.567745   13525 pod_ready.go:92] pod "coredns-64897985d-6m889" in "kube-system" namespace has status "Ready":"True"
	I0602 11:06:08.567758   13525 pod_ready.go:81] duration metric: took 6.045686452s waiting for pod "coredns-64897985d-6m889" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.567764   13525 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-vnxnm" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.572156   13525 pod_ready.go:92] pod "coredns-64897985d-vnxnm" in "kube-system" namespace has status "Ready":"True"
	I0602 11:06:08.572165   13525 pod_ready.go:81] duration metric: took 4.396943ms waiting for pod "coredns-64897985d-vnxnm" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.572172   13525 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.580106   13525 pod_ready.go:92] pod "etcd-no-preload-20220602105919-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:06:08.580116   13525 pod_ready.go:81] duration metric: took 7.939202ms waiting for pod "etcd-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.580124   13525 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.585011   13525 pod_ready.go:92] pod "kube-apiserver-no-preload-20220602105919-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:06:08.585021   13525 pod_ready.go:81] duration metric: took 4.892259ms waiting for pod "kube-apiserver-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.585027   13525 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.589734   13525 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220602105919-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:06:08.589743   13525 pod_ready.go:81] duration metric: took 4.7114ms waiting for pod "kube-controller-manager-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.589749   13525 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cjctl" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.933111   13525 pod_ready.go:92] pod "kube-proxy-cjctl" in "kube-system" namespace has status "Ready":"True"
	I0602 11:06:08.933120   13525 pod_ready.go:81] duration metric: took 343.36086ms waiting for pod "kube-proxy-cjctl" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:08.933126   13525 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:09.333186   13525 pod_ready.go:92] pod "kube-scheduler-no-preload-20220602105919-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:06:09.333196   13525 pod_ready.go:81] duration metric: took 400.05867ms waiting for pod "kube-scheduler-no-preload-20220602105919-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:06:09.333203   13525 pod_ready.go:38] duration metric: took 6.819069303s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:06:09.333216   13525 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:06:09.333264   13525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:09.372834   13525 api_server.go:71] duration metric: took 7.233658751s to wait for apiserver process to appear ...
	I0602 11:06:09.372853   13525 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:06:09.372862   13525 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:51941/healthz ...
	I0602 11:06:09.378487   13525 api_server.go:266] https://127.0.0.1:51941/healthz returned 200:
	ok
	I0602 11:06:09.379772   13525 api_server.go:140] control plane version: v1.23.6
	I0602 11:06:09.379782   13525 api_server.go:130] duration metric: took 6.923741ms to wait for apiserver health ...
	I0602 11:06:09.379786   13525 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:06:09.540922   13525 system_pods.go:59] 9 kube-system pods found
	I0602 11:06:09.540950   13525 system_pods.go:61] "coredns-64897985d-6m889" [1efb4b3b-2c70-4955-ae5a-1ca9c4b97cb4] Running
	I0602 11:06:09.540958   13525 system_pods.go:61] "coredns-64897985d-vnxnm" [87f4263e-9841-4bc8-9d4b-b54296061d0e] Running
	I0602 11:06:09.540965   13525 system_pods.go:61] "etcd-no-preload-20220602105919-2113" [c842274b-e6bc-4c21-892a-f22388d2fb25] Running
	I0602 11:06:09.540973   13525 system_pods.go:61] "kube-apiserver-no-preload-20220602105919-2113" [1b8f4bdd-c9af-47a9-a60a-fd91abef1b9d] Running
	I0602 11:06:09.540986   13525 system_pods.go:61] "kube-controller-manager-no-preload-20220602105919-2113" [36d6cfb1-3dd4-485b-9962-225a493ddb0a] Running
	I0602 11:06:09.540992   13525 system_pods.go:61] "kube-proxy-cjctl" [79eff5cb-2888-4f02-8072-f0b91b7ae18a] Running
	I0602 11:06:09.540997   13525 system_pods.go:61] "kube-scheduler-no-preload-20220602105919-2113" [b928d458-fdaa-4c7f-9440-45548309d5f6] Running
	I0602 11:06:09.541007   13525 system_pods.go:61] "metrics-server-b955d9d8-mt94g" [3ff97994-84e1-48cd-9935-128402ff47c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:06:09.541017   13525 system_pods.go:61] "storage-provisioner" [45aca05a-d370-433b-a31d-c5af9b987ae1] Running
	I0602 11:06:09.541021   13525 system_pods.go:74] duration metric: took 161.227922ms to wait for pod list to return data ...
	I0602 11:06:09.541025   13525 default_sa.go:34] waiting for default service account to be created ...
	I0602 11:06:09.733006   13525 default_sa.go:45] found service account: "default"
	I0602 11:06:09.733018   13525 default_sa.go:55] duration metric: took 191.98523ms for default service account to be created ...
	I0602 11:06:09.733024   13525 system_pods.go:116] waiting for k8s-apps to be running ...
	I0602 11:06:09.956582   13525 system_pods.go:86] 9 kube-system pods found
	I0602 11:06:09.956595   13525 system_pods.go:89] "coredns-64897985d-6m889" [1efb4b3b-2c70-4955-ae5a-1ca9c4b97cb4] Running
	I0602 11:06:09.956600   13525 system_pods.go:89] "coredns-64897985d-vnxnm" [87f4263e-9841-4bc8-9d4b-b54296061d0e] Running
	I0602 11:06:09.956603   13525 system_pods.go:89] "etcd-no-preload-20220602105919-2113" [c842274b-e6bc-4c21-892a-f22388d2fb25] Running
	I0602 11:06:09.956613   13525 system_pods.go:89] "kube-apiserver-no-preload-20220602105919-2113" [1b8f4bdd-c9af-47a9-a60a-fd91abef1b9d] Running
	I0602 11:06:09.956617   13525 system_pods.go:89] "kube-controller-manager-no-preload-20220602105919-2113" [36d6cfb1-3dd4-485b-9962-225a493ddb0a] Running
	I0602 11:06:09.956624   13525 system_pods.go:89] "kube-proxy-cjctl" [79eff5cb-2888-4f02-8072-f0b91b7ae18a] Running
	I0602 11:06:09.956628   13525 system_pods.go:89] "kube-scheduler-no-preload-20220602105919-2113" [b928d458-fdaa-4c7f-9440-45548309d5f6] Running
	I0602 11:06:09.956635   13525 system_pods.go:89] "metrics-server-b955d9d8-mt94g" [3ff97994-84e1-48cd-9935-128402ff47c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:06:09.956641   13525 system_pods.go:89] "storage-provisioner" [45aca05a-d370-433b-a31d-c5af9b987ae1] Running
	I0602 11:06:09.956645   13525 system_pods.go:126] duration metric: took 223.614368ms to wait for k8s-apps to be running ...
	I0602 11:06:09.956651   13525 system_svc.go:44] waiting for kubelet service to be running ....
	I0602 11:06:09.956697   13525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:06:09.969456   13525 system_svc.go:56] duration metric: took 12.798458ms WaitForService to wait for kubelet.
	I0602 11:06:09.969478   13525 kubeadm.go:572] duration metric: took 7.830295092s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0602 11:06:09.969495   13525 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:06:10.134370   13525 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:06:10.134383   13525 node_conditions.go:123] node cpu capacity is 6
	I0602 11:06:10.134390   13525 node_conditions.go:105] duration metric: took 164.887939ms to run NodePressure ...
	I0602 11:06:10.134397   13525 start.go:213] waiting for startup goroutines ...
	I0602 11:06:10.164528   13525 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0602 11:06:10.196610   13525 out.go:177] * Done! kubectl is now configured to use "no-preload-20220602105919-2113" cluster and "default" namespace by default
	I0602 11:06:06.668244   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058988448s)
	I0602 11:06:06.668353   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:06.668361   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:09.209694   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:09.719071   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:09.751868   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.751881   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:09.751941   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:09.782377   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.782387   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:09.782461   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:09.812852   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.812866   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:09.812927   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:09.841271   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.841287   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:09.841355   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:09.869322   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.869337   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:09.869404   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:09.904831   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.904845   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:09.904924   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:09.935441   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.935452   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:09.935513   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:09.971502   13778 logs.go:274] 0 containers: []
	W0602 11:06:09.971513   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:09.971520   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:09.971526   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:09.984595   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:09.984608   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:12.040057   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055401538s)
	I0602 11:06:12.040168   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:12.040175   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:12.084908   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:12.084928   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:12.099657   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:12.099674   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:12.176399   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:14.677774   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:14.719277   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:14.749284   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.749296   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:14.749352   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:14.779602   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.779617   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:14.779692   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:14.810304   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.810315   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:14.810375   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:14.840825   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.840837   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:14.840895   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:14.871176   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.871189   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:14.871245   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:14.899620   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.899632   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:14.899690   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:14.928084   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.928098   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:14.928152   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:14.958074   13778 logs.go:274] 0 containers: []
	W0602 11:06:14.958086   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:14.958093   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:14.958100   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:14.998133   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:14.998148   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:15.010030   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:15.010044   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:15.062993   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:15.063012   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:15.063020   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:15.074991   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:15.075002   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:17.150624   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.075573411s)
	I0602 11:06:19.651352   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:19.721185   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:19.753754   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.753767   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:19.753824   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:19.785309   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.785320   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:19.785375   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:19.815519   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.815532   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:19.815592   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:19.844388   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.844403   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:19.844460   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:19.874394   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.874405   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:19.874463   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:19.903563   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.903575   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:19.903636   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:19.932385   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.932397   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:19.932455   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:19.961585   13778 logs.go:274] 0 containers: []
	W0602 11:06:19.961597   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:19.961604   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:19.961611   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:20.002244   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:20.002257   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:20.014432   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:20.014446   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:20.076253   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:20.076266   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:20.076274   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:20.088518   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:20.088530   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:22.145216   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056638404s)
	I0602 11:06:24.646167   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:24.719768   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:24.751355   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.751366   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:24.751429   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:24.782962   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.782973   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:24.783035   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:24.813990   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.814003   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:24.814058   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:24.848961   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.848974   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:24.849032   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:24.878730   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.878742   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:24.878798   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:24.906982   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.906994   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:24.907050   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:24.938955   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.938968   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:24.939036   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:24.970095   13778 logs.go:274] 0 containers: []
	W0602 11:06:24.970109   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:24.970122   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:24.970131   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:25.015415   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:25.015429   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:25.027601   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:25.027615   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:25.079664   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:25.079676   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:25.079685   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:25.091626   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:25.091642   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:27.149516   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057826153s)
	I0602 11:06:29.650792   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:29.721431   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:29.752590   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.752602   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:29.752682   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:29.781730   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.781745   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:29.781812   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:29.811830   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.811842   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:29.811899   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:29.844830   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.844842   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:29.844906   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:29.874059   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.874074   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:29.874138   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:29.903122   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.903134   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:29.903203   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:29.931909   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.931920   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:29.931981   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:29.959768   13778 logs.go:274] 0 containers: []
	W0602 11:06:29.959780   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:29.959787   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:29.959793   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:29.971640   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:29.971654   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:32.025610   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053903096s)
	I0602 11:06:32.025734   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:32.025742   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:32.066635   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:32.066655   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:32.078867   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:32.078880   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:32.133725   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:34.634284   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:34.721701   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:34.751984   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.751995   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:34.752050   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:34.779859   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.779872   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:34.779929   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:34.809891   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.809902   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:34.809967   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:34.838099   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.838111   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:34.838170   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:34.866657   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.866673   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:34.866736   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:34.895965   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.895980   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:34.896037   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:34.924358   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.924371   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:34.924427   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:34.954617   13778 logs.go:274] 0 containers: []
	W0602 11:06:34.954628   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:34.954635   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:34.954646   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:34.992693   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:34.992705   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:35.005024   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:35.005041   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:35.061106   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:35.061116   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:35.061122   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:35.073095   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:35.073107   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:37.128746   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055582995s)
	I0602 11:06:39.629638   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:39.719744   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:39.751161   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.751172   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:39.751233   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:39.780249   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.780261   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:39.780319   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:39.809191   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.809204   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:39.809259   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:39.837277   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.837288   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:39.837354   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:39.865911   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.865922   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:39.865977   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:39.894428   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.894440   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:39.894508   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:39.923609   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.923621   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:39.923681   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:39.952594   13778 logs.go:274] 0 containers: []
	W0602 11:06:39.952606   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:39.952613   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:39.952631   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:42.012619   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059940213s)
	I0602 11:06:42.012752   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:42.012763   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:42.051824   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:42.051860   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:42.064028   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:42.064044   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:42.116407   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:42.116419   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:42.116429   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:44.630691   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:44.720202   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:44.753527   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.753540   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:44.753594   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:44.783807   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.783820   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:44.783877   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:44.815087   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.815101   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:44.815157   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:44.855143   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.855157   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:44.855211   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:44.884114   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.884126   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:44.884184   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:44.912516   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.912529   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:44.912586   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:44.942078   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.942090   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:44.942144   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:44.973360   13778 logs.go:274] 0 containers: []
	W0602 11:06:44.973371   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:44.973378   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:44.973384   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:45.013557   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:45.013572   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:45.024888   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:45.024900   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:45.077791   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:45.077807   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:45.077815   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:45.089614   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:45.089626   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:47.143631   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053956953s)
	I0602 11:06:49.645446   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:49.720524   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:49.751916   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.751928   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:49.751985   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:49.781581   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.781593   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:49.781650   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:49.811063   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.811076   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:49.811131   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:49.839799   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.839812   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:49.839870   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:49.868670   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.868683   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:49.868741   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:49.897111   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.897125   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:49.897187   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:49.926696   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.926708   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:49.926765   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:49.955084   13778 logs.go:274] 0 containers: []
	W0602 11:06:49.955097   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:49.955103   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:49.955110   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:50.010000   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:50.010012   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:50.010021   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:50.022044   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:50.022057   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:52.079829   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057724742s)
	I0602 11:06:52.079935   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:52.079942   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:52.119564   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:52.119577   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:54.633352   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:54.721975   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:54.753327   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.753339   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:54.753394   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:54.782146   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.782158   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:54.782214   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:54.810970   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.810983   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:54.811029   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:54.842645   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.842665   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:54.842725   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:54.871490   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.871502   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:54.871556   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:54.900472   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.900483   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:54.900541   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:54.929112   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.929124   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:54.929182   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:54.958837   13778 logs.go:274] 0 containers: []
	W0602 11:06:54.958849   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:54.958857   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:06:54.958866   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:06:54.998335   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:06:54.998348   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:06:55.009734   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:06:55.009746   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:06:55.062791   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:06:55.062801   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:55.062808   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:55.074548   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:55.074559   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:06:57.132240   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057634309s)
	I0602 11:06:59.633858   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:06:59.720436   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:06:59.752920   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.752935   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:06:59.752993   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:06:59.784345   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.784360   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:06:59.784424   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:06:59.814781   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.814794   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:06:59.814853   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:06:59.850880   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.850892   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:06:59.850948   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:06:59.880523   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.880539   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:06:59.880600   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:06:59.910968   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.910980   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:06:59.911060   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:06:59.946727   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.946740   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:06:59.946803   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:06:59.981179   13778 logs.go:274] 0 containers: []
	W0602 11:06:59.981189   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:06:59.981196   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:06:59.981202   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:06:59.994847   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:06:59.994861   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
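[Editor's note] The repeated blocks above are minikube's log collector looping over the same probes while it waits for the API server to come back. As a rough sketch only (assuming shell access to the node, e.g. via minikube ssh), the same checks can be re-run by hand; every command below is taken verbatim from the log lines above:

    # Sketch: manually repeating the probes the collector loops over.
    sudo pgrep -xnf kube-apiserver.*minikube.*                           # is an apiserver process alive?
    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'     # is there an apiserver container at all?
    sudo journalctl -u kubelet -n 400                                    # kubelet logs
    sudo crictl ps -a || sudo docker ps -a                               # container status, with docker fallback
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig                        # fails above: connection refused on localhost:8443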
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 18:00:34 UTC, end at Thu 2022-06-02 18:07:04 UTC. --
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.247251413Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1190fa5e12d71dfba8d50a719bce4231bac81bc59e465d65e7a839b4a4394d5d
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.300308800Z" level=info msg="ignoring event" container=1190fa5e12d71dfba8d50a719bce4231bac81bc59e465d65e7a839b4a4394d5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.420803845Z" level=info msg="ignoring event" container=7b87068dd668db059b1659af95c1ebac44d8cfec1987b392fdada3a2ec5390f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.533424308Z" level=info msg="ignoring event" container=1001748b761c246792ebe69031bd1d8cebf4555fc9152b2cbeb357bff8ff37b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.638099844Z" level=info msg="ignoring event" container=2bd7a2b0c7ef1d440825b5570bf51468e988959801e0dfbfc1acfb127a1638ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.742392954Z" level=info msg="ignoring event" container=bbc63dc9ebd9cc751b9ea1f86ccfccb4cbd79124b15a2ae19ec1167ecfcddb75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.843012042Z" level=info msg="ignoring event" container=08fd7fb9075176794027b1e9a6d0174ba97cc0d1c6c0b5760c1598518837adbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:05:39 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:05:39.958892225Z" level=info msg="ignoring event" container=04d6c920b3b262adc1633d6e1412ed4b2ec7c4f3b821d434d1e12bcda21e2959 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:06:04 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:04.946363688Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:06:04 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:04.946404495Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:06:04 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:04.947611594Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:06:05 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:05.811788282Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 02 18:06:10 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:10.950613307Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 02 18:06:11 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:11.173460161Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 02 18:06:13 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:13.835268383Z" level=info msg="ignoring event" container=84cae9ad8db493d57f8f26231f91497ac994ee7e5dda8e1be19f7ece4287f7c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:06:14 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:14.006788654Z" level=info msg="ignoring event" container=5e1df4966cd471a089ab1c68570784dfda5ea0e6b3c470749bf67c44256a97d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:06:14 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:14.220479470Z" level=info msg="ignoring event" container=a3d8f61758db16903b1c4bcc316978273404b6db5ee32fcaff23a6fd6eef58d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:06:15 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:15.240643394Z" level=info msg="ignoring event" container=d327b1348eba62c457259a4fca6b6aaed27896904c18fd2f08d4862fa034693c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:06:18 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:18.003479057Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:06:18 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:18.003530194Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:06:18 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:06:18.004980323Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:07:01 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:07:01.503468881Z" level=info msg="ignoring event" container=e19b634d815f7008711a196cf3cb60425f8ecf4c75e9a47c08cb37e602cfcd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:07:01 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:07:01.836092069Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:07:01 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:07:01.836134484Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:07:01 no-preload-20220602105919-2113 dockerd[130]: time="2022-06-02T18:07:01.879742173Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
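[Editor's note] The recurring "lookup fake.domain ... no such host" warnings are image pulls against a registry host that does not resolve, so every attempt dies at the DNS lookup rather than at the registry. A hedged diagnostic sketch from inside the node (image name is from the log; the metrics-server label is an assumption):

    # Sketch: confirming the failing pull seen in the dockerd log above.
    nslookup fake.domain                                                 # expected to fail: the host does not exist
    docker pull fake.domain/k8s.gcr.io/echoserver:1.4                    # reproduces the "no such host" error
    kubectl -n kube-system describe pod -l k8s-app=metrics-server        # assumed label; shows the resulting ErrImagePull events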
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	e19b634d815f7       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   2                   e88e8ced257dc
	d876cddda7250       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   54 seconds ago       Running             kubernetes-dashboard        0                   c9774b0435813
	aea3e9eb80c2b       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   244468690c915
	248c6e4cf3927       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   34d7e0e53727b
	f63aafca1604c       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   4118ece26bcaf
	d1811367575d1       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   2a0dc1c83b358
	6c116a3954881       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   12b7326e3f111
	4bae480b79dc1       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   5434de364e1d8
	063f469ac3db3       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   395cfd8f46679
	
	* 
	* ==> coredns [248c6e4cf392] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220602105919-2113
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220602105919-2113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=no-preload-20220602105919-2113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T11_05_48_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 18:05:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220602105919-2113
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 18:06:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 18:06:58 +0000   Thu, 02 Jun 2022 18:05:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 18:06:58 +0000   Thu, 02 Jun 2022 18:05:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 18:06:58 +0000   Thu, 02 Jun 2022 18:05:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 18:06:58 +0000   Thu, 02 Jun 2022 18:06:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    no-preload-20220602105919-2113
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                535efd20-df3b-41c1-a9d6-c3f0fbb7439d
	  Boot ID:                    a475dd08-72ba-4c6d-89c1-75a58adc3783
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-6m889                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     62s
	  kube-system                 etcd-no-preload-20220602105919-2113                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         76s
	  kube-system                 kube-apiserver-no-preload-20220602105919-2113              250m (4%)     0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-no-preload-20220602105919-2113     200m (3%)     0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-cjctl                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-no-preload-20220602105919-2113              100m (1%)     0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 metrics-server-b955d9d8-mt94g                              100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         61s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-cj9rj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kubernetes-dashboard        kubernetes-dashboard-cd7c84bfc-mzc2x                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 61s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    82s (x4 over 82s)  kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x4 over 82s)  kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  82s (x4 over 82s)  kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasSufficientMemory
	  Normal  Starting                 76s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s                kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s                kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                65s                kubelet     Node no-preload-20220602105919-2113 status is now: NodeReady
	  Normal  Starting                 7s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet     Node no-preload-20220602105919-2113 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s                 kubelet     Node no-preload-20220602105919-2113 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  6s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6s                 kubelet     Node no-preload-20220602105919-2113 status is now: NodeReady
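[Editor's note] The event list shows the kubelet restarting and the node flapping NodeNotReady -> NodeReady in the last few seconds. A hedged, narrower way to watch just the node conditions and recent events (node name taken from the report above):

    kubectl get node no-preload-20220602105919-2113 \
        -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
    kubectl get events --field-selector involvedObject.name=no-preload-20220602105919-2113 \
        --sort-by=.lastTimestamp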
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [4bae480b79dc] <==
	* {"level":"info","ts":"2022-06-02T18:05:43.631Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-06-02T18:05:43.630Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b2c6679ac05f2cf1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2022-06-02T18:05:43.633Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-02T18:05:43.633Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-02T18:05:43.633Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:05:43.633Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:05:43.633Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:05:44.527Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:05:44.528Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:05:44.528Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:05:44.528Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:05:44.528Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:no-preload-20220602105919-2113 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T18:05:44.528Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:05:44.528Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-02T18:05:44.529Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:05:44.529Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T18:05:44.529Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T18:05:44.529Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  18:07:05 up 55 min,  0 users,  load average: 0.42, 0.72, 1.08
	Linux no-preload-20220602105919-2113 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [d1811367575d] <==
	* I0602 18:05:47.320536       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 18:05:47.344390       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 18:05:47.411487       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0602 18:05:47.415446       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0602 18:05:47.416301       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 18:05:47.418923       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0602 18:05:48.175020       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 18:05:48.624764       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 18:05:48.632739       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0602 18:05:48.640361       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0602 18:05:48.794805       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0602 18:06:01.763118       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0602 18:06:01.863216       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0602 18:06:03.613945       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.102.1.35]
	I0602 18:06:03.890710       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	W0602 18:06:04.404143       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 18:06:04.404214       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 18:06:04.404220       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0602 18:06:04.713461       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.96.171.174]
	I0602 18:06:04.780233       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.105.110.138]
	W0602 18:07:04.362390       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 18:07:04.362463       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 18:07:04.362470       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
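[Editor's note] The two OpenAPI aggregation failures above are the API server being unable to reach the backend behind the v1beta1.metrics.k8s.io APIService (the metrics-server pod never started; see the image pull errors earlier). A hedged check of the aggregated API status:

    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl get apiservice v1beta1.metrics.k8s.io \
        -o jsonpath='{.status.conditions[?(@.type=="Available")].message}{"\n"}'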
	
	* 
	* ==> kube-controller-manager [063f469ac3db] <==
	* I0602 18:06:02.042783       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-vnxnm"
	I0602 18:06:03.398757       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0602 18:06:03.407241       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0602 18:06:03.472778       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0602 18:06:03.483158       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-mt94g"
	I0602 18:06:04.581467       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0602 18:06:04.587908       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0602 18:06:04.593214       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-cd7c84bfc to 1"
	E0602 18:06:04.593364       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:06:04.595339       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:06:04.603313       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0602 18:06:04.603942       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:06:04.603968       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:06:04.607829       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:06:04.607882       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:06:04.610397       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:06:04.610448       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:06:04.617943       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:06:04.618029       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0602 18:06:04.620193       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:06:04.620203       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:06:04.674840       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-cj9rj"
	I0602 18:06:04.679726       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-cd7c84bfc-mzc2x"
	E0602 18:06:57.457623       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0602 18:06:57.465408       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [f63aafca1604] <==
	* I0602 18:06:03.793941       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0602 18:06:03.794001       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0602 18:06:03.794044       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 18:06:03.887113       1 server_others.go:206] "Using iptables Proxier"
	I0602 18:06:03.887135       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 18:06:03.887142       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 18:06:03.887156       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 18:06:03.887623       1 server.go:656] "Version info" version="v1.23.6"
	I0602 18:06:03.888098       1 config.go:317] "Starting service config controller"
	I0602 18:06:03.888155       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 18:06:03.888177       1 config.go:226] "Starting endpoint slice config controller"
	I0602 18:06:03.888180       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 18:06:03.989342       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 18:06:03.989362       1 shared_informer.go:247] Caches are synced for service config 
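[Editor's note] kube-proxy fell back to the iptables proxier (no mode configured) and reports its caches synced. A hedged sanity check that service rules were actually programmed, run from inside the node; the kube-proxy configmap name/layout is the usual kubeadm one and is assumed here:

    sudo iptables-save -t nat | grep -c 'KUBE-SERVICES'                      # non-zero once the iptables proxier installed its chains
    kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'   # assumed kubeadm-style configmap; empty mode means iptables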
	
	* 
	* ==> kube-scheduler [6c116a395488] <==
	* W0602 18:05:46.108182       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 18:05:46.108227       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0602 18:05:46.108306       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 18:05:46.108355       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 18:05:46.109168       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 18:05:46.109213       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 18:05:46.109417       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 18:05:46.109480       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 18:05:46.109554       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0602 18:05:46.109595       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0602 18:05:46.109417       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0602 18:05:46.109783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0602 18:05:46.925620       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 18:05:46.925661       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0602 18:05:46.970835       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0602 18:05:46.970905       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0602 18:05:47.011635       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 18:05:47.011711       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0602 18:05:47.074012       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0602 18:05:47.074049       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0602 18:05:47.090067       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0602 18:05:47.090112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 18:05:47.219201       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0602 18:05:47.219241       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0602 18:05:47.603603       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
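[Editor's note] The scheduler's "forbidden" list/watch errors are confined to startup (18:05:46-18:05:47) and stop once its informers sync; this is the usual transient noise while RBAC for system:kube-scheduler is still being published by the API server. A hedged way to confirm the permissions afterwards:

    kubectl get clusterrolebinding system:kube-scheduler -o wide
    kubectl auth can-i list pods --all-namespaces --as=system:kube-scheduler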
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 18:00:34 UTC, end at Thu 2022-06-02 18:07:06 UTC. --
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064698    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kvgh\" (UniqueName: \"kubernetes.io/projected/45aca05a-d370-433b-a31d-c5af9b987ae1-kube-api-access-4kvgh\") pod \"storage-provisioner\" (UID: \"45aca05a-d370-433b-a31d-c5af9b987ae1\") " pod="kube-system/storage-provisioner"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064718    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79eff5cb-2888-4f02-8072-f0b91b7ae18a-xtables-lock\") pod \"kube-proxy-cjctl\" (UID: \"79eff5cb-2888-4f02-8072-f0b91b7ae18a\") " pod="kube-system/kube-proxy-cjctl"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064744    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k25l5\" (UniqueName: \"kubernetes.io/projected/79eff5cb-2888-4f02-8072-f0b91b7ae18a-kube-api-access-k25l5\") pod \"kube-proxy-cjctl\" (UID: \"79eff5cb-2888-4f02-8072-f0b91b7ae18a\") " pod="kube-system/kube-proxy-cjctl"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064796    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsdft\" (UniqueName: \"kubernetes.io/projected/1efb4b3b-2c70-4955-ae5a-1ca9c4b97cb4-kube-api-access-jsdft\") pod \"coredns-64897985d-6m889\" (UID: \"1efb4b3b-2c70-4955-ae5a-1ca9c4b97cb4\") " pod="kube-system/coredns-64897985d-6m889"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064818    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/42655025-9d8f-4b9d-9b4f-e57da0c9771b-tmp-volume\") pod \"dashboard-metrics-scraper-56974995fc-cj9rj\" (UID: \"42655025-9d8f-4b9d-9b4f-e57da0c9771b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-cj9rj"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064834    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw6wj\" (UniqueName: \"kubernetes.io/projected/937d38bc-b2d7-4a95-ad97-cb199dfd5ef8-kube-api-access-jw6wj\") pod \"kubernetes-dashboard-cd7c84bfc-mzc2x\" (UID: \"937d38bc-b2d7-4a95-ad97-cb199dfd5ef8\") " pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-mzc2x"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064937    7198 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79eff5cb-2888-4f02-8072-f0b91b7ae18a-lib-modules\") pod \"kube-proxy-cjctl\" (UID: \"79eff5cb-2888-4f02-8072-f0b91b7ae18a\") " pod="kube-system/kube-proxy-cjctl"
	Jun 02 18:06:59 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:06:59.064953    7198 reconciler.go:157] "Reconciler: start to sync state"
	Jun 02 18:07:00 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:07:00.238186    7198 request.go:665] Waited for 1.191596336s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 02 18:07:00 no-preload-20220602105919-2113 kubelet[7198]: E0602 18:07:00.336436    7198 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220602105919-2113\" already exists" pod="kube-system/kube-scheduler-no-preload-20220602105919-2113"
	Jun 02 18:07:00 no-preload-20220602105919-2113 kubelet[7198]: E0602 18:07:00.454280    7198 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220602105919-2113\" already exists" pod="kube-system/etcd-no-preload-20220602105919-2113"
	Jun 02 18:07:00 no-preload-20220602105919-2113 kubelet[7198]: E0602 18:07:00.720612    7198 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220602105919-2113\" already exists" pod="kube-system/kube-apiserver-no-preload-20220602105919-2113"
	Jun 02 18:07:01 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:07:01.143684    7198 scope.go:110] "RemoveContainer" containerID="d327b1348eba62c457259a4fca6b6aaed27896904c18fd2f08d4862fa034693c"
	Jun 02 18:07:01 no-preload-20220602105919-2113 kubelet[7198]: E0602 18:07:01.880590    7198 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 02 18:07:01 no-preload-20220602105919-2113 kubelet[7198]: E0602 18:07:01.880680    7198 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 02 18:07:01 no-preload-20220602105919-2113 kubelet[7198]: E0602 18:07:01.880857    7198 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xrd47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-mt94g_kube-system(3ff97994-84e1-48cd-9935-128402ff47c0): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 02 18:07:01 no-preload-20220602105919-2113 kubelet[7198]: E0602 18:07:01.880901    7198 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-mt94g" podUID=3ff97994-84e1-48cd-9935-128402ff47c0
	Jun 02 18:07:02 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:07:02.067051    7198 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-cj9rj through plugin: invalid network status for"
	Jun 02 18:07:02 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:07:02.075381    7198 scope.go:110] "RemoveContainer" containerID="d327b1348eba62c457259a4fca6b6aaed27896904c18fd2f08d4862fa034693c"
	Jun 02 18:07:02 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:07:02.075617    7198 scope.go:110] "RemoveContainer" containerID="e19b634d815f7008711a196cf3cb60425f8ecf4c75e9a47c08cb37e602cfcd61"
	Jun 02 18:07:02 no-preload-20220602105919-2113 kubelet[7198]: E0602 18:07:02.075832    7198 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-cj9rj_kubernetes-dashboard(42655025-9d8f-4b9d-9b4f-e57da0c9771b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-cj9rj" podUID=42655025-9d8f-4b9d-9b4f-e57da0c9771b
	Jun 02 18:07:02 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:07:02.093695    7198 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Jun 02 18:07:03 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:07:03.081244    7198 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-cj9rj through plugin: invalid network status for"
	Jun 02 18:07:03 no-preload-20220602105919-2113 kubelet[7198]: I0602 18:07:03.083902    7198 scope.go:110] "RemoveContainer" containerID="e19b634d815f7008711a196cf3cb60425f8ecf4c75e9a47c08cb37e602cfcd61"
	Jun 02 18:07:03 no-preload-20220602105919-2113 kubelet[7198]: E0602 18:07:03.084041    7198 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-cj9rj_kubernetes-dashboard(42655025-9d8f-4b9d-9b4f-e57da0c9771b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-cj9rj" podUID=42655025-9d8f-4b9d-9b4f-e57da0c9771b
	
	* 
	* ==> kubernetes-dashboard [d876cddda725] <==
	* 2022/06/02 18:06:10 Using namespace: kubernetes-dashboard
	2022/06/02 18:06:10 Using in-cluster config to connect to apiserver
	2022/06/02 18:06:10 Using secret token for csrf signing
	2022/06/02 18:06:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/02 18:06:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/02 18:06:10 Successful initial request to the apiserver, version: v1.23.6
	2022/06/02 18:06:10 Generating JWE encryption key
	2022/06/02 18:06:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/02 18:06:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/02 18:06:10 Initializing JWE encryption key from synchronized object
	2022/06/02 18:06:10 Creating in-cluster Sidecar client
	2022/06/02 18:06:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/02 18:06:10 Serving insecurely on HTTP port: 9090
	2022/06/02 18:06:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/02 18:06:10 Starting overwatch
	
	* 
	* ==> storage-provisioner [aea3e9eb80c2] <==
	* I0602 18:06:04.470354       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 18:06:04.480832       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 18:06:04.480890       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 18:06:04.487145       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 18:06:04.487293       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220602105919-2113_b6594a44-c79c-4c71-a29a-ea67307901dd!
	I0602 18:06:04.487919       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d815899-92a0-47d9-b0da-6cf8c36f4375", APIVersion:"v1", ResourceVersion:"507", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220602105919-2113_b6594a44-c79c-4c71-a29a-ea67307901dd became leader
	I0602 18:06:04.589740       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220602105919-2113_b6594a44-c79c-4c71-a29a-ea67307901dd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220602105919-2113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-mt94g
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220602105919-2113 describe pod metrics-server-b955d9d8-mt94g
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220602105919-2113 describe pod metrics-server-b955d9d8-mt94g: exit status 1 (285.440486ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-mt94g" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220602105919-2113 describe pod metrics-server-b955d9d8-mt94g: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (43.65s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0602 11:13:01.386817    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:14:12.696693    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:14:20.065408    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 11:14:23.981955    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:14:29.123185    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:15:11.566650    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:15:47.027283    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 11:15:52.276494    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:16:54.188215    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:17:15.322431    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:17:52.709637    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
E0602 11:17:52.715045    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
E0602 11:17:52.726618    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
E0602 11:17:52.748794    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
E0602 11:17:52.789961    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
E0602 11:17:52.870415    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
E0602 11:17:53.031284    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
E0602 11:17:53.353587    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:17:57.836277    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
E0602 11:18:01.392309    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 11:18:02.956601    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:18:13.197492    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
E0602 11:18:17.245950    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:18:33.678965    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:18:54.122356    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:19:03.545543    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:19:12.701888    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:19:14.641994    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
E0602 11:19:17.898713    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:19:20.068992    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 11:19:23.985126    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:19:29.129475    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:20:11.571935    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:20:17.175784    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:20:26.604774    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:20:36.563756    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:20:52.282284    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:21:54.194093    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:276: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113
start_stop_delete_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113: exit status 2 (459.873646ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:276: status error: exit status 2 (may be ok)
start_stop_delete_test.go:276: "old-k8s-version-20220602105906-2113" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220602105906-2113
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220602105906-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07",
	        "Created": "2022-06-02T17:59:12.760386506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204740,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T18:04:51.572935922Z",
	            "FinishedAt": "2022-06-02T18:04:48.684748032Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/hostname",
	        "HostsPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/hosts",
	        "LogPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07-json.log",
	        "Name": "/old-k8s-version-20220602105906-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220602105906-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220602105906-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220602105906-2113",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220602105906-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220602105906-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220602105906-2113",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220602105906-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77d71d4d8d15408927c38bc69753733fb245f90b6786c7b56828647b3b4389d6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52182"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52183"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52179"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52180"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52181"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/77d71d4d8d15",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220602105906-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "61b85e98188b",
	                        "old-k8s-version-20220602105906-2113"
	                    ],
	                    "NetworkID": "fefb74a76593392c8406a972f20a5745c2403bb46ee6809bd1a18584d4cbeee4",
	                    "EndpointID": "3cd2312efe3d60be38aeb6608533eff057e701e91a3e65f1ab1e73ec94a72df1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113
E0602 11:22:32.191511    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113: exit status 2 (436.822438ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220602105906-2113 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220602105906-2113 logs -n 25: (3.489452514s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| logs    | old-k8s-version-20220602105906-2113                        | old-k8s-version-20220602105906-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:12 PDT | 02 Jun 22 11:13 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:13 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:13 PDT | 02 Jun 22 11:13 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:13 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220602110711-2113             | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220602110711-2113             | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220602111446-2113 --memory=2200            | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220602111446-2113 --memory=2200            | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220602111446-2113                             | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220602111446-2113                             | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	| start   | -p                                                         | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                |         |                |                     |                     |
	|         | --driver=docker                                            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 11:17:54
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 11:17:54.298706   15352 out.go:296] Setting OutFile to fd 1 ...
	I0602 11:17:54.298896   15352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:17:54.298901   15352 out.go:309] Setting ErrFile to fd 2...
	I0602 11:17:54.298905   15352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:17:54.299002   15352 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 11:17:54.299282   15352 out.go:303] Setting JSON to false
	I0602 11:17:54.314716   15352 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4643,"bootTime":1654189231,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 11:17:54.314829   15352 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 11:17:54.336522   15352 out.go:177] * [embed-certs-20220602111648-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 11:17:54.379858   15352 notify.go:193] Checking for updates...
	I0602 11:17:54.401338   15352 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 11:17:54.422430   15352 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:17:54.443822   15352 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 11:17:54.465706   15352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 11:17:54.487842   15352 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 11:17:54.510345   15352 config.go:178] Loaded profile config "embed-certs-20220602111648-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:17:54.511006   15352 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 11:17:54.583879   15352 docker.go:137] docker version: linux-20.10.14
	I0602 11:17:54.584008   15352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:17:54.710496   15352 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:17:54.661726472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:17:54.732441   15352 out.go:177] * Using the docker driver based on existing profile
	I0602 11:17:54.754261   15352 start.go:284] selected driver: docker
	I0602 11:17:54.754294   15352 start.go:806] validating driver "docker" against &{Name:embed-certs-20220602111648-2113 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Schedule
dStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:17:54.754438   15352 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 11:17:54.757822   15352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:17:54.886547   15352 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:17:54.836693909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:17:54.886708   15352 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 11:17:54.886725   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:17:54.886733   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:17:54.886755   15352 start_flags.go:306] config:
	{Name:embed-certs-20220602111648-2113 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clus
ter.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:17:54.930397   15352 out.go:177] * Starting control plane node embed-certs-20220602111648-2113 in cluster embed-certs-20220602111648-2113
	I0602 11:17:54.952534   15352 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 11:17:54.974462   15352 out.go:177] * Pulling base image ...
	I0602 11:17:55.016639   15352 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:17:55.016641   15352 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 11:17:55.016722   15352 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 11:17:55.016736   15352 cache.go:57] Caching tarball of preloaded images
	I0602 11:17:55.016927   15352 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 11:17:55.016959   15352 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 11:17:55.017969   15352 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/config.json ...
	I0602 11:17:55.082071   15352 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 11:17:55.082088   15352 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 11:17:55.082098   15352 cache.go:206] Successfully downloaded all kic artifacts
	I0602 11:17:55.082139   15352 start.go:352] acquiring machines lock for embed-certs-20220602111648-2113: {Name:mk14ff68897b305c2bdfb36f1ceaa58ce32379a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 11:17:55.082233   15352 start.go:356] acquired machines lock for "embed-certs-20220602111648-2113" in 73.195µs
	I0602 11:17:55.082254   15352 start.go:94] Skipping create...Using existing machine configuration
	I0602 11:17:55.082263   15352 fix.go:55] fixHost starting: 
	I0602 11:17:55.082507   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:17:55.149317   15352 fix.go:103] recreateIfNeeded on embed-certs-20220602111648-2113: state=Stopped err=<nil>
	W0602 11:17:55.149352   15352 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 11:17:55.192959   15352 out.go:177] * Restarting existing docker container for "embed-certs-20220602111648-2113" ...
	I0602 11:17:55.214224   15352 cli_runner.go:164] Run: docker start embed-certs-20220602111648-2113
	I0602 11:17:55.579016   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:17:55.651976   15352 kic.go:416] container "embed-certs-20220602111648-2113" state is running.
	I0602 11:17:55.652516   15352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220602111648-2113
	I0602 11:17:55.726686   15352 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/config.json ...
	I0602 11:17:55.727067   15352 machine.go:88] provisioning docker machine ...
	I0602 11:17:55.727092   15352 ubuntu.go:169] provisioning hostname "embed-certs-20220602111648-2113"
	I0602 11:17:55.727154   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:55.800251   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:55.800475   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:55.800489   15352 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220602111648-2113 && echo "embed-certs-20220602111648-2113" | sudo tee /etc/hostname
	I0602 11:17:55.940753   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220602111648-2113
	
	I0602 11:17:55.940849   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.013703   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.013881   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.013895   15352 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220602111648-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220602111648-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220602111648-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 11:17:56.130458   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:17:56.130490   15352 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 11:17:56.130508   15352 ubuntu.go:177] setting up certificates
	I0602 11:17:56.130518   15352 provision.go:83] configureAuth start
	I0602 11:17:56.130590   15352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220602111648-2113
	I0602 11:17:56.202522   15352 provision.go:138] copyHostCerts
	I0602 11:17:56.202610   15352 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 11:17:56.202620   15352 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 11:17:56.202707   15352 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 11:17:56.202956   15352 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 11:17:56.202966   15352 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 11:17:56.203024   15352 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 11:17:56.203210   15352 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 11:17:56.203230   15352 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 11:17:56.203292   15352 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 11:17:56.203402   15352 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220602111648-2113 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220602111648-2113]
	I0602 11:17:56.290352   15352 provision.go:172] copyRemoteCerts
	I0602 11:17:56.290417   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 11:17:56.290462   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.363098   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:56.448844   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0602 11:17:56.468413   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 11:17:56.487167   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0602 11:17:56.504244   15352 provision.go:86] duration metric: configureAuth took 373.70854ms
	I0602 11:17:56.504257   15352 ubuntu.go:193] setting minikube options for container-runtime
	I0602 11:17:56.504400   15352 config.go:178] Loaded profile config "embed-certs-20220602111648-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:17:56.504454   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.574726   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.574873   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.574883   15352 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 11:17:56.692552   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 11:17:56.692565   15352 ubuntu.go:71] root file system type: overlay
	I0602 11:17:56.692719   15352 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 11:17:56.692794   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.763208   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.763366   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.763424   15352 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 11:17:56.888442   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 11:17:56.888522   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.959173   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.959343   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.959378   15352 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 11:17:57.080070   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:17:57.080081   15352 machine.go:91] provisioned docker machine in 1.352983871s
	I0602 11:17:57.080092   15352 start.go:306] post-start starting for "embed-certs-20220602111648-2113" (driver="docker")
	I0602 11:17:57.080099   15352 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 11:17:57.080167   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 11:17:57.080224   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.150320   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.237169   15352 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 11:17:57.240932   15352 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 11:17:57.240947   15352 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 11:17:57.240960   15352 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 11:17:57.240965   15352 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 11:17:57.240973   15352 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 11:17:57.241075   15352 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 11:17:57.241205   15352 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 11:17:57.241347   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 11:17:57.249423   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:17:57.266686   15352 start.go:309] post-start completed in 186.579963ms
	I0602 11:17:57.266764   15352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 11:17:57.266809   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.337389   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.419423   15352 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 11:17:57.423756   15352 fix.go:57] fixHost completed within 2.341450978s
	I0602 11:17:57.423771   15352 start.go:81] releasing machines lock for "embed-certs-20220602111648-2113", held for 2.341488916s
	I0602 11:17:57.423846   15352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220602111648-2113
	I0602 11:17:57.493832   15352 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 11:17:57.493842   15352 ssh_runner.go:195] Run: systemctl --version
	I0602 11:17:57.493909   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.493898   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.571385   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.572948   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.784521   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 11:17:57.797372   15352 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:17:57.806989   15352 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 11:17:57.807041   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 11:17:57.816005   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 11:17:57.829060   15352 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 11:17:57.898903   15352 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 11:17:57.967953   15352 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:17:57.977779   15352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 11:17:58.050651   15352 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 11:17:58.060254   15352 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:17:58.095467   15352 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:17:58.172409   15352 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 11:17:58.172543   15352 cli_runner.go:164] Run: docker exec -t embed-certs-20220602111648-2113 dig +short host.docker.internal
	I0602 11:17:58.301503   15352 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 11:17:58.301604   15352 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 11:17:58.305905   15352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:17:58.316714   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:58.387831   15352 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:17:58.387911   15352 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:17:58.416852   15352 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0602 11:17:58.416866   15352 docker.go:541] Images already preloaded, skipping extraction
	I0602 11:17:58.416944   15352 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:17:58.447690   15352 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0602 11:17:58.447713   15352 cache_images.go:84] Images are preloaded, skipping loading
	I0602 11:17:58.447820   15352 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 11:17:58.520455   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:17:58.520468   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:17:58.520483   15352 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 11:17:58.520502   15352 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220602111648-2113 NodeName:embed-certs-20220602111648-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 11:17:58.520613   15352 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20220602111648-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 11:17:58.520681   15352 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20220602111648-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 11:17:58.520742   15352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 11:17:58.528337   15352 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 11:17:58.528400   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 11:17:58.535248   15352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	I0602 11:17:58.547429   15352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 11:17:58.559653   15352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
	I0602 11:17:58.572912   15352 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 11:17:58.576677   15352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:17:58.585837   15352 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113 for IP: 192.168.58.2
	I0602 11:17:58.585959   15352 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 11:17:58.586013   15352 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 11:17:58.586093   15352 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/client.key
	I0602 11:17:58.586153   15352 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/apiserver.key.cee25041
	I0602 11:17:58.586215   15352 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/proxy-client.key
	I0602 11:17:58.586412   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 11:17:58.586453   15352 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 11:17:58.586477   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 11:17:58.586519   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 11:17:58.586551   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 11:17:58.586580   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 11:17:58.586639   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:17:58.587181   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 11:17:58.604132   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 11:17:58.620640   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 11:17:58.637561   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 11:17:58.654357   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 11:17:58.671422   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 11:17:58.687905   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 11:17:58.704559   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 11:17:58.721152   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 11:17:58.738095   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 11:17:58.754705   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 11:17:58.771067   15352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 11:17:58.783467   15352 ssh_runner.go:195] Run: openssl version
	I0602 11:17:58.788645   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 11:17:58.796302   15352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 11:17:58.800112   15352 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 11:17:58.800156   15352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 11:17:58.805418   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 11:17:58.812620   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 11:17:58.820133   15352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:17:58.824238   15352 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:17:58.824280   15352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:17:58.829346   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 11:17:58.836768   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 11:17:58.844364   15352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 11:17:58.848158   15352 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 11:17:58.848204   15352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 11:17:58.853444   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
	I0602 11:17:58.860527   15352 kubeadm.go:395] StartCluster: {Name:embed-certs-20220602111648-2113 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:17:58.860620   15352 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:17:58.889454   15352 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 11:17:58.897140   15352 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 11:17:58.897153   15352 kubeadm.go:626] restartCluster start
	I0602 11:17:58.897196   15352 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 11:17:58.903854   15352 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:58.903907   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:58.974750   15352 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220602111648-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:17:58.975016   15352 kubeconfig.go:127] "embed-certs-20220602111648-2113" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig - will repair!
	I0602 11:17:58.975368   15352 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:17:58.976710   15352 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 11:17:58.984402   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:58.984445   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:58.992514   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.194646   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.194824   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.205800   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.394596   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.394711   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.404574   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.592620   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.592742   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.603566   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.792706   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.792789   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.801888   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.992644   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.992738   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.004887   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.194652   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.194785   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.205062   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.394638   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.394783   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.405305   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.593032   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.593156   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.602450   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.793140   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.793270   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.803822   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.992792   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.992919   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.003646   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.194714   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.194891   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.206158   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.393563   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.393610   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.402165   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.593865   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.593962   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.604645   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.794719   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.794882   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.806019   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.993241   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.993427   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:02.004637   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.004647   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:02.004690   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:02.012637   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.012650   15352 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0602 11:18:02.012657   15352 kubeadm.go:1092] stopping kube-system containers ...
	I0602 11:18:02.012720   15352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:18:02.043235   15352 docker.go:442] Stopping containers: [6b1ddf58ceb9 2443900e874e db141163e6d4 0356cd90224b f1d263c9b0f1 14883f2e0c47 2b1660b40df3 2259cd9108be 1277daa5a30b 8f0298e2ec89 9fa8e7282212 4f92dc954d61 bbe61b313255 6db85ab616c7 703d34253678 d3aedabaf004]
	I0602 11:18:02.043308   15352 ssh_runner.go:195] Run: docker stop 6b1ddf58ceb9 2443900e874e db141163e6d4 0356cd90224b f1d263c9b0f1 14883f2e0c47 2b1660b40df3 2259cd9108be 1277daa5a30b 8f0298e2ec89 9fa8e7282212 4f92dc954d61 bbe61b313255 6db85ab616c7 703d34253678 d3aedabaf004
	I0602 11:18:02.073833   15352 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 11:18:02.087788   15352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:18:02.095874   15352 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun  2 18:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun  2 18:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun  2 18:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  2 18:17 /etc/kubernetes/scheduler.conf
	
	I0602 11:18:02.095938   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0602 11:18:02.103319   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0602 11:18:02.110716   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0602 11:18:02.117486   15352 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.117534   15352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0602 11:18:02.124006   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0602 11:18:02.130595   15352 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.130640   15352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0602 11:18:02.137026   15352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:18:02.143920   15352 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 11:18:02.143937   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:02.186111   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:02.940146   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:03.065256   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:03.113758   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:03.165838   15352 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:18:03.165901   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:18:03.677915   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:18:04.176018   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:18:04.191155   15352 api_server.go:71] duration metric: took 1.025302471s to wait for apiserver process to appear ...
	I0602 11:18:04.191173   15352 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:18:04.191182   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:04.192377   15352 api_server.go:256] stopped: https://127.0.0.1:54894/healthz: Get "https://127.0.0.1:54894/healthz": EOF
	I0602 11:18:04.693127   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:07.094069   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0602 11:18:07.094108   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0602 11:18:07.193195   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:07.202009   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:18:07.202029   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:18:07.693364   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:07.700473   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:18:07.700494   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:18:08.192616   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:08.197675   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:18:08.197689   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:18:08.692589   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:08.697963   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 200:
	ok
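	For reference, the healthz probe logged above can be approximated from the host while the container's port mapping is still active. This is only a sketch: the forwarded port 54894 is assigned per run, and an anonymous request may still be rejected with 403 as in the earlier attempts (minikube's own check authenticates with the cluster's client credentials).
	curl -sk https://127.0.0.1:54894/healthz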
	I0602 11:18:08.704402   15352 api_server.go:140] control plane version: v1.23.6
	I0602 11:18:08.704415   15352 api_server.go:130] duration metric: took 4.513159523s to wait for apiserver health ...
	I0602 11:18:08.704422   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:18:08.704427   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:18:08.704436   15352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:18:08.712420   15352 system_pods.go:59] 8 kube-system pods found
	I0602 11:18:08.712443   15352 system_pods.go:61] "coredns-64897985d-mqhps" [a9db0af0-c7e2-43f0-94d1-285cf82eefc6] Running
	I0602 11:18:08.712450   15352 system_pods.go:61] "etcd-embed-certs-20220602111648-2113" [655c91b8-a19a-4a3d-8fc4-4bb99628728c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 11:18:08.712457   15352 system_pods.go:61] "kube-apiserver-embed-certs-20220602111648-2113" [1c169e07-9698-455b-bc45-fb6268c818dd] Running
	I0602 11:18:08.712463   15352 system_pods.go:61] "kube-controller-manager-embed-certs-20220602111648-2113" [8dabcc9b-0bff-45c0-b617-b673244bb05e] Running
	I0602 11:18:08.712467   15352 system_pods.go:61] "kube-proxy-hxhmn" [0b00b834-77d9-498a-b6f4-73ada68667be] Running
	I0602 11:18:08.712471   15352 system_pods.go:61] "kube-scheduler-embed-certs-20220602111648-2113" [2d987b9c-0f04-4851-bdb4-d9d1eefcc598] Running
	I0602 11:18:08.712481   15352 system_pods.go:61] "metrics-server-b955d9d8-5k65t" [27770582-e78d-4495-83a5-a03c3c22b6ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:18:08.712489   15352 system_pods.go:61] "storage-provisioner" [971f85e7-9555-4ad3-aada-015be49207a6] Running
	I0602 11:18:08.712494   15352 system_pods.go:74] duration metric: took 8.053604ms to wait for pod list to return data ...
	I0602 11:18:08.712501   15352 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:18:08.718457   15352 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:18:08.718474   15352 node_conditions.go:123] node cpu capacity is 6
	I0602 11:18:08.718485   15352 node_conditions.go:105] duration metric: took 5.979977ms to run NodePressure ...
	I0602 11:18:08.718498   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:08.917133   15352 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0602 11:18:08.963399   15352 kubeadm.go:777] kubelet initialised
	I0602 11:18:08.963410   15352 kubeadm.go:778] duration metric: took 46.263216ms waiting for restarted kubelet to initialise ...
	I0602 11:18:08.963418   15352 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:18:08.968510   15352 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-mqhps" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:08.973930   15352 pod_ready.go:92] pod "coredns-64897985d-mqhps" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:08.973941   15352 pod_ready.go:81] duration metric: took 5.418497ms waiting for pod "coredns-64897985d-mqhps" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:08.973947   15352 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:10.987864   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:13.489319   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:15.984994   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:17.985135   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:20.487923   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:20.984961   15352 pod_ready.go:92] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:20.984975   15352 pod_ready.go:81] duration metric: took 12.010814852s waiting for pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:20.984981   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:22.996747   15352 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:23.497076   15352 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.497088   15352 pod_ready.go:81] duration metric: took 2.512058532s waiting for pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.497094   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.500990   15352 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.500999   15352 pod_ready.go:81] duration metric: took 3.899621ms waiting for pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.501005   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hxhmn" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.504762   15352 pod_ready.go:92] pod "kube-proxy-hxhmn" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.504770   15352 pod_ready.go:81] duration metric: took 3.760621ms waiting for pod "kube-proxy-hxhmn" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.504775   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.508796   15352 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.508803   15352 pod_ready.go:81] duration metric: took 4.023396ms waiting for pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.508810   15352 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:25.519475   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:28.019880   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:30.021312   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:32.520124   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:35.018464   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:37.019378   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:39.020228   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:41.520520   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:44.019685   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:46.021361   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:48.517860   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:50.519722   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:52.520558   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:55.021033   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:57.518515   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:59.520949   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:01.521775   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:04.020252   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:06.021659   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:08.522036   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:11.019578   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:13.021252   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:15.519890   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:17.522449   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:20.019069   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:22.022494   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:24.519019   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:26.520994   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:29.019342   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:31.021808   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:33.518558   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:35.522527   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:38.019317   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:40.021350   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:42.519178   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:44.522452   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:47.020277   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:49.020861   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:51.021940   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:53.522777   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:56.022962   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:58.023294   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:00.519960   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:02.521430   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:05.022687   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:07.522208   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:10.021463   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:12.519965   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:14.522183   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:17.021383   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:19.023054   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:21.520910   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:23.523643   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:26.021449   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:28.023761   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:30.522348   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:33.024537   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:35.523518   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:37.523926   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:40.023533   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:42.520330   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:44.521363   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:46.523702   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:49.021771   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:51.022021   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:53.022137   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:55.024682   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:57.522459   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:00.022039   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:02.022164   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:04.022963   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:06.023102   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:08.520914   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:10.522452   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:13.022353   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:15.024327   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:17.024604   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:19.024700   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:21.521873   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:24.026794   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:26.523991   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:29.022868   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:31.023261   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:33.023747   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:35.024513   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:37.522052   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:39.523349   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:41.523819   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:44.023580   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:46.524426   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:48.524790   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:51.025030   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:53.522632   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:55.523997   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:57.526073   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:00.025125   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:02.522387   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:04.525282   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:07.024864   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:09.523673   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:11.524761   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:13.525553   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:16.023071   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:18.023459   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:20.525701   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:23.023773   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:23.517112   15352 pod_ready.go:81] duration metric: took 4m0.004136963s waiting for pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace to be "Ready" ...
	E0602 11:22:23.517134   15352 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace to be "Ready" (will not retry!)
	I0602 11:22:23.517161   15352 pod_ready.go:38] duration metric: took 4m14.54933227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:22:23.517193   15352 kubeadm.go:630] restartCluster took 4m24.615456672s
	W0602 11:22:23.517311   15352 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0602 11:22:23.517339   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 18:04:51 UTC, end at Thu 2022-06-02 18:22:33 UTC. --
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 systemd[1]: Starting Docker Application Container Engine...
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.822221462Z" level=info msg="Starting up"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.824058418Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.824139651Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.824195269Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.824296574Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.825626806Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.825660593Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.825673330Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.825685292Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.830709849Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.834670305Z" level=info msg="Loading containers: start."
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.916131885Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.947713032Z" level=info msg="Loading containers: done."
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.958029440Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.958093467Z" level=info msg="Daemon has completed initialization"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 systemd[1]: Started Docker Application Container Engine.
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.983186383Z" level=info msg="API listen on [::]:2376"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.985769795Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-06-02T18:22:35Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  18:22:35 up  1:10,  0 users,  load average: 0.14, 0.64, 0.92
	Linux old-k8s-version-20220602105906-2113 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 18:04:51 UTC, end at Thu 2022-06-02 18:22:35 UTC. --
	Jun 02 18:22:34 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 02 18:22:34 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Jun 02 18:22:34 old-k8s-version-20220602105906-2113 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 02 18:22:34 old-k8s-version-20220602105906-2113 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 02 18:22:34 old-k8s-version-20220602105906-2113 kubelet[24378]: I0602 18:22:34.906901   24378 server.go:410] Version: v1.16.0
	Jun 02 18:22:34 old-k8s-version-20220602105906-2113 kubelet[24378]: I0602 18:22:34.907229   24378 plugins.go:100] No cloud provider specified.
	Jun 02 18:22:34 old-k8s-version-20220602105906-2113 kubelet[24378]: I0602 18:22:34.907243   24378 server.go:773] Client rotation is on, will bootstrap in background
	Jun 02 18:22:34 old-k8s-version-20220602105906-2113 kubelet[24378]: I0602 18:22:34.910747   24378 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 02 18:22:34 old-k8s-version-20220602105906-2113 kubelet[24378]: W0602 18:22:34.911353   24378 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 02 18:22:34 old-k8s-version-20220602105906-2113 kubelet[24378]: W0602 18:22:34.911415   24378 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 02 18:22:34 old-k8s-version-20220602105906-2113 kubelet[24378]: F0602 18:22:34.911442   24378 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 02 18:22:34 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 02 18:22:34 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 02 18:22:35 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	Jun 02 18:22:35 old-k8s-version-20220602105906-2113 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 02 18:22:35 old-k8s-version-20220602105906-2113 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 02 18:22:35 old-k8s-version-20220602105906-2113 kubelet[24414]: I0602 18:22:35.655168   24414 server.go:410] Version: v1.16.0
	Jun 02 18:22:35 old-k8s-version-20220602105906-2113 kubelet[24414]: I0602 18:22:35.655494   24414 plugins.go:100] No cloud provider specified.
	Jun 02 18:22:35 old-k8s-version-20220602105906-2113 kubelet[24414]: I0602 18:22:35.655525   24414 server.go:773] Client rotation is on, will bootstrap in background
	Jun 02 18:22:35 old-k8s-version-20220602105906-2113 kubelet[24414]: I0602 18:22:35.657261   24414 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 02 18:22:35 old-k8s-version-20220602105906-2113 kubelet[24414]: W0602 18:22:35.657893   24414 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 02 18:22:35 old-k8s-version-20220602105906-2113 kubelet[24414]: W0602 18:22:35.657954   24414 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 02 18:22:35 old-k8s-version-20220602105906-2113 kubelet[24414]: F0602 18:22:35.657977   24414 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 02 18:22:35 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 02 18:22:35 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 11:22:35.509498   15523 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113: exit status 2 (461.536966ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220602105906-2113" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.04s)
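The repeated pod_ready.go lines above are minikube polling the metrics-server pod until its Ready condition becomes True, then giving up after the 4m0s timeout recorded in the duration metric. As a rough illustration only (this is not minikube's actual pod_ready.go code; the kubeconfig path, poll interval, and error handling are assumptions), a client-go wait loop for the same check might look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption made for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s and give up after 4 minutes, matching the timeout in the log above.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-b955d9d8-5k65t", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat lookup errors as "not ready yet" and keep polling
		}
		return isPodReady(pod), nil
	})
	if err != nil {
		fmt.Println("timed out waiting for pod to be Ready:", err)
	}
}

When the condition never returns true within the timeout, wait.PollImmediate returns a timeout error, which corresponds to the "WaitExtra: waitPodCondition: timed out waiting 4m0s" path seen in the log above.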

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/Pause (43.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220602110711-2113 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220602110711-2113 -n default-k8s-different-port-20220602110711-2113
E0602 11:14:03.539583    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220602110711-2113 -n default-k8s-different-port-20220602110711-2113: exit status 2 (16.112501893s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220602110711-2113 -n default-k8s-different-port-20220602110711-2113

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220602110711-2113 -n default-k8s-different-port-20220602110711-2113: exit status 2 (16.110044376s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220602110711-2113 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220602110711-2113 -n default-k8s-different-port-20220602110711-2113
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220602110711-2113 -n default-k8s-different-port-20220602110711-2113
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220602110711-2113
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220602110711-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "15da6650a4b717cc84de1a8ff14b95f23c483fa6df765351eab5f9f831f1fbb5",
	        "Created": "2022-06-02T18:07:17.909147477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 222470,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T18:08:16.779614085Z",
	            "FinishedAt": "2022-06-02T18:08:14.847555704Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/15da6650a4b717cc84de1a8ff14b95f23c483fa6df765351eab5f9f831f1fbb5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/15da6650a4b717cc84de1a8ff14b95f23c483fa6df765351eab5f9f831f1fbb5/hostname",
	        "HostsPath": "/var/lib/docker/containers/15da6650a4b717cc84de1a8ff14b95f23c483fa6df765351eab5f9f831f1fbb5/hosts",
	        "LogPath": "/var/lib/docker/containers/15da6650a4b717cc84de1a8ff14b95f23c483fa6df765351eab5f9f831f1fbb5/15da6650a4b717cc84de1a8ff14b95f23c483fa6df765351eab5f9f831f1fbb5-json.log",
	        "Name": "/default-k8s-different-port-20220602110711-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220602110711-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220602110711-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/749671fe85bbaa8f0cf9f29b9ab3d51afdbf4dae17a1592400c7a586e732a945-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/749671fe85bbaa8f0cf9f29b9ab3d51afdbf4dae17a1592400c7a586e732a945/merged",
	                "UpperDir": "/var/lib/docker/overlay2/749671fe85bbaa8f0cf9f29b9ab3d51afdbf4dae17a1592400c7a586e732a945/diff",
	                "WorkDir": "/var/lib/docker/overlay2/749671fe85bbaa8f0cf9f29b9ab3d51afdbf4dae17a1592400c7a586e732a945/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220602110711-2113",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220602110711-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220602110711-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220602110711-2113",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220602110711-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb30e712988b82480f56a77f37ed83f25e19054ed8c00505f88cc31c5c7055e7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52979"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52980"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52981"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52982"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52983"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/eb30e712988b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220602110711-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "15da6650a4b7",
	                        "default-k8s-different-port-20220602110711-2113"
	                    ],
	                    "NetworkID": "fe40b6b9d189fb34bb611388ce54fac245dc51e55f85ea4b41021b7f6808cdc7",
	                    "EndpointID": "d3bda36e318da8f19ed8e633b041bed1c31187a0fb70a3e015d79abfbb51e6ab",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
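The post-mortem docker inspect above dumps the entire container document; for a pause failure like this one, the State and NetworkSettings blocks are the parts that matter. A minimal sketch using the Docker Engine Go SDK, assuming the default Docker socket is reachable from the host, that reads just those State fields for the same container:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	// Connect to the local Docker daemon using environment defaults.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Inspect the minikube node container by name, as the post-mortem step does.
	info, err := cli.ContainerInspect(context.Background(),
		"default-k8s-different-port-20220602110711-2113")
	if err != nil {
		panic(err)
	}

	// These are the same State fields shown in the inspect output above. Note that
	// `minikube pause` pauses workloads inside the node, so the outer container
	// still reports Running=true and Paused=false here.
	fmt.Printf("status=%s running=%t paused=%t\n",
		info.State.Status, info.State.Running, info.State.Paused)
}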
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602110711-2113 -n default-k8s-different-port-20220602110711-2113
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220602110711-2113 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220602110711-2113 logs -n 25: (2.548543506s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p kubenet-20220602104455-2113                    | kubenet-20220602104455-2113                    | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	| delete  | -p                                                | disable-driver-mounts-20220602105918-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	|         | disable-driver-mounts-20220602105918-2113         |                                                |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220602105906-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:04 PDT | 02 Jun 22 11:04 PDT |
	|         | old-k8s-version-20220602105906-2113               |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220602105906-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:04 PDT | 02 Jun 22 11:04 PDT |
	|         | old-k8s-version-20220602105906-2113               |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | no-preload-20220602105919-2113                    | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:07 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | no-preload-20220602105919-2113                    | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220602105906-2113               | old-k8s-version-20220602105906-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:12 PDT | 02 Jun 22 11:13 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:13 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:13 PDT | 02 Jun 22 11:13 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:13 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 11:08:15
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 11:08:15.517716   14271 out.go:296] Setting OutFile to fd 1 ...
	I0602 11:08:15.517914   14271 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:08:15.517920   14271 out.go:309] Setting ErrFile to fd 2...
	I0602 11:08:15.517924   14271 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:08:15.518039   14271 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 11:08:15.518296   14271 out.go:303] Setting JSON to false
	I0602 11:08:15.533877   14271 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4064,"bootTime":1654189231,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 11:08:15.534006   14271 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 11:08:15.555791   14271 out.go:177] * [default-k8s-different-port-20220602110711-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 11:08:15.597880   14271 notify.go:193] Checking for updates...
	I0602 11:08:15.619617   14271 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 11:08:15.640808   14271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:08:15.661783   14271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 11:08:15.682595   14271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 11:08:15.703785   14271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 11:08:15.725094   14271 config.go:178] Loaded profile config "default-k8s-different-port-20220602110711-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:08:15.725430   14271 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 11:08:15.796906   14271 docker.go:137] docker version: linux-20.10.14
	I0602 11:08:15.797053   14271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:08:15.922561   14271 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:08:15.86746037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:08:15.966236   14271 out.go:177] * Using the docker driver based on existing profile
	I0602 11:08:15.988390   14271 start.go:284] selected driver: docker
	I0602 11:08:15.988424   14271 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220602110711-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-
20220602110711-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:08:15.988564   14271 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 11:08:15.991998   14271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:08:16.114994   14271 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:08:16.062502247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:08:16.115182   14271 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 11:08:16.115205   14271 cni.go:95] Creating CNI manager for ""
	I0602 11:08:16.115214   14271 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:08:16.115223   14271 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220602110711-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220602110711-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Networ
k: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:08:16.158958   14271 out.go:177] * Starting control plane node default-k8s-different-port-20220602110711-2113 in cluster default-k8s-different-port-20220602110711-2113
	I0602 11:08:16.181099   14271 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 11:08:16.202847   14271 out.go:177] * Pulling base image ...
	I0602 11:08:16.244857   14271 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:08:16.244892   14271 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 11:08:16.244926   14271 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 11:08:16.244951   14271 cache.go:57] Caching tarball of preloaded images
	I0602 11:08:16.245139   14271 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 11:08:16.245160   14271 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
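A minimal sketch of the preload-existence check logged just above, assuming only the cache layout visible in the path (the helper name preloadTarballPath is illustrative, not minikube's own API):

	// Sketch: verify a cached preload tarball exists before deciding to download.
	// Directory layout and file naming are taken from the log lines above;
	// the function name is hypothetical.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadTarballPath builds the cache path seen in the log for a given
	// Kubernetes version, container runtime and architecture (preload schema "v18").
	func preloadTarballPath(minikubeHome, k8sVersion, runtime, arch string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4", k8sVersion, runtime, arch)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadTarballPath(os.Getenv("MINIKUBE_HOME"), "v1.23.6", "docker", "amd64")
		if _, err := os.Stat(p); err != nil {
			fmt.Println("preload not cached, would download:", err)
			return
		}
		fmt.Println("found local preload, skipping download:", p)
	}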
	I0602 11:08:16.246083   14271 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/config.json ...
	I0602 11:08:16.310676   14271 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 11:08:16.310691   14271 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 11:08:16.310699   14271 cache.go:206] Successfully downloaded all kic artifacts
	I0602 11:08:16.310742   14271 start.go:352] acquiring machines lock for default-k8s-different-port-20220602110711-2113: {Name:mk5c32f64296c6672223bdc5496081160863f257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 11:08:16.310822   14271 start.go:356] acquired machines lock for "default-k8s-different-port-20220602110711-2113" in 60.649µs
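A rough sketch of a file-based machines lock with the Delay/Timeout values printed above (retry every 500ms, give up after 10m); the lock-file path and function name are illustrative, not minikube's implementation:

	// Sketch: acquire an exclusive lock file with a retry delay and overall timeout.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquire retries creating the lock file exclusively until it succeeds or the
	// timeout elapses.
	func acquire(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				return f.Close()
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for machines lock")
			}
			time.Sleep(delay)
		}
	}

	func main() {
		lock := "/tmp/minikube-machines.lock" // hypothetical path
		if err := acquire(lock, 500*time.Millisecond, 10*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("acquired machines lock")
		_ = os.Remove(lock)
	}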
	I0602 11:08:16.310842   14271 start.go:94] Skipping create...Using existing machine configuration
	I0602 11:08:16.310853   14271 fix.go:55] fixHost starting: 
	I0602 11:08:16.311066   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:08:16.377507   14271 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220602110711-2113: state=Stopped err=<nil>
	W0602 11:08:16.377551   14271 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 11:08:16.399302   14271 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220602110711-2113" ...
	I0602 11:08:16.420479   14271 cli_runner.go:164] Run: docker start default-k8s-different-port-20220602110711-2113
	I0602 11:08:16.774466   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:08:16.847223   14271 kic.go:416] container "default-k8s-different-port-20220602110711-2113" state is running.
	I0602 11:08:16.847828   14271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220602110711-2113
	I0602 11:08:16.920874   14271 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/config.json ...
	I0602 11:08:16.921257   14271 machine.go:88] provisioning docker machine ...
	I0602 11:08:16.921280   14271 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220602110711-2113"
	I0602 11:08:16.921351   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:16.993938   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:16.994122   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:16.994150   14271 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220602110711-2113 && echo "default-k8s-different-port-20220602110711-2113" | sudo tee /etc/hostname
	I0602 11:08:17.119677   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220602110711-2113
	
	I0602 11:08:17.119769   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.193462   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:17.193625   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:17.193641   14271 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220602110711-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220602110711-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220602110711-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 11:08:17.313470   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: 
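The two provisioning commands above (set the hostname, then make 127.0.1.1 resolve to it) reconstructed as plain strings so the quoting is easier to read; the helper names are illustrative and the command text mirrors the log:

	// Sketch: build the hostname-provisioning SSH commands seen in the log.
	package main

	import "fmt"

	// setHostnameCmd sets the kernel hostname and persists it to /etc/hostname.
	func setHostnameCmd(name string) string {
		return fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	}

	// fixEtcHostsCmd rewrites or appends the 127.0.1.1 entry, matching the
	// shape of the here-script in the log above.
	func fixEtcHostsCmd(name string) string {
		return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	  fi
	fi`, name)
	}

	func main() {
		host := "default-k8s-different-port-20220602110711-2113"
		fmt.Println(setHostnameCmd(host))
		fmt.Println(fixEtcHostsCmd(host))
	}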
	I0602 11:08:17.313494   14271 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 11:08:17.313514   14271 ubuntu.go:177] setting up certificates
	I0602 11:08:17.313526   14271 provision.go:83] configureAuth start
	I0602 11:08:17.313600   14271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.386535   14271 provision.go:138] copyHostCerts
	I0602 11:08:17.386632   14271 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 11:08:17.386642   14271 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 11:08:17.386747   14271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 11:08:17.386997   14271 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 11:08:17.387004   14271 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 11:08:17.387064   14271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 11:08:17.387225   14271 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 11:08:17.387231   14271 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 11:08:17.387292   14271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 11:08:17.387411   14271 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220602110711-2113 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220602110711-2113]
	I0602 11:08:17.434515   14271 provision.go:172] copyRemoteCerts
	I0602 11:08:17.434580   14271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 11:08:17.434625   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.506502   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:17.593925   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 11:08:17.614967   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0602 11:08:17.637005   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0602 11:08:17.658235   14271 provision.go:86] duration metric: configureAuth took 344.691133ms
	I0602 11:08:17.658249   14271 ubuntu.go:193] setting minikube options for container-runtime
	I0602 11:08:17.658395   14271 config.go:178] Loaded profile config "default-k8s-different-port-20220602110711-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:08:17.658448   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.730610   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:17.730757   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:17.730766   14271 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 11:08:17.850560   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 11:08:17.850583   14271 ubuntu.go:71] root file system type: overlay
	I0602 11:08:17.850750   14271 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 11:08:17.850832   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.922108   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:17.922253   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:17.922301   14271 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 11:08:18.046181   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 11:08:18.046271   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.117615   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:18.117752   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:18.117764   14271 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 11:08:18.238940   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:08:18.238960   14271 machine.go:91] provisioned docker machine in 1.317671465s
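A sketch of the idempotent unit-update pattern used above: write docker.service.new, diff it against the live unit, and only swap it in and restart the daemon when the contents differ. The helper name is illustrative; the shell text follows the logged one-liner:

	// Sketch: install a new systemd unit only when it changed, then reload/restart.
	package main

	import "fmt"

	// updateUnitCmd returns a shell one-liner that installs newPath over unitPath
	// only when the contents differ, then reloads systemd and restarts the service.
	func updateUnitCmd(unitPath, newPath, service string) string {
		return fmt.Sprintf(
			"sudo diff -u %[1]s %[2]s || { sudo mv %[2]s %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable %[3]s && sudo systemctl -f restart %[3]s; }",
			unitPath, newPath, service)
	}

	func main() {
		fmt.Println(updateUnitCmd(
			"/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new",
			"docker"))
	}

The diff-before-swap step is what keeps an unchanged restart from needlessly bouncing the docker daemon.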
	I0602 11:08:18.238969   14271 start.go:306] post-start starting for "default-k8s-different-port-20220602110711-2113" (driver="docker")
	I0602 11:08:18.238974   14271 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 11:08:18.239040   14271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 11:08:18.239086   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.309021   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:18.395195   14271 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 11:08:18.398736   14271 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 11:08:18.398753   14271 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 11:08:18.398761   14271 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 11:08:18.398769   14271 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 11:08:18.398779   14271 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 11:08:18.398885   14271 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 11:08:18.399033   14271 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 11:08:18.399193   14271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 11:08:18.406089   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:08:18.423802   14271 start.go:309] post-start completed in 184.82013ms
	I0602 11:08:18.423883   14271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 11:08:18.423931   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.493419   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:18.577352   14271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 11:08:18.582028   14271 fix.go:57] fixHost completed within 2.271136565s
	I0602 11:08:18.582039   14271 start.go:81] releasing machines lock for "default-k8s-different-port-20220602110711-2113", held for 2.271170149s
	I0602 11:08:18.582108   14271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.652251   14271 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 11:08:18.652251   14271 ssh_runner.go:195] Run: systemctl --version
	I0602 11:08:18.652335   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.652339   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.729373   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:18.731038   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:18.813622   14271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 11:08:18.943560   14271 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:08:18.954030   14271 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 11:08:18.954084   14271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 11:08:18.963406   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 11:08:18.976091   14271 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 11:08:19.040894   14271 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 11:08:19.108714   14271 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:08:19.118700   14271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 11:08:19.185811   14271 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 11:08:19.195192   14271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:08:19.228635   14271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:08:15.221956   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:15.271807   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.271819   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:15.271873   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:15.303439   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.303452   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:15.303518   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:15.333961   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.333988   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:15.334084   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:15.364875   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.364888   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:15.364950   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:15.395700   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.395712   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:15.395765   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:15.424510   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.424520   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:15.424572   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:15.453415   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.453428   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:15.453493   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:15.483708   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.483719   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:15.483724   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:15.483730   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:15.538743   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:15.538752   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:15.538758   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:15.550783   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:15.550794   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:17.605845   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055003078s)
	I0602 11:08:17.605979   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:17.605988   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:17.649331   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:17.649353   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:20.164014   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:19.305934   14271 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 11:08:19.306113   14271 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220602110711-2113 dig +short host.docker.internal
	I0602 11:08:19.446242   14271 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 11:08:19.446326   14271 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 11:08:19.450862   14271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:08:19.460634   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:19.531276   14271 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:08:19.531337   14271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:08:19.561235   14271 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0602 11:08:19.561251   14271 docker.go:541] Images already preloaded, skipping extraction
	I0602 11:08:19.561312   14271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:08:19.591189   14271 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0602 11:08:19.591211   14271 cache_images.go:84] Images are preloaded, skipping loading
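Roughly how the "Images are preloaded, skipping loading" decision can be reproduced by hand: list the tags in the daemon and check that the expected set (copied from the stdout block above) is present. This assumes a local docker CLI is available; it is a sketch, not minikube's cache_images logic:

	// Sketch: check whether the expected Kubernetes images are already in the daemon.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			fmt.Println("docker images failed:", err)
			return
		}
		have := map[string]bool{}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			have[line] = true
		}
		expected := []string{
			"k8s.gcr.io/kube-apiserver:v1.23.6",
			"k8s.gcr.io/etcd:3.5.1-0",
			"k8s.gcr.io/coredns/coredns:v1.8.6",
			"k8s.gcr.io/pause:3.6",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		missing := 0
		for _, img := range expected {
			if !have[img] {
				fmt.Println("missing:", img)
				missing++
			}
		}
		if missing == 0 {
			fmt.Println("all expected images present, extraction can be skipped")
		}
	}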
	I0602 11:08:19.591282   14271 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 11:08:19.665013   14271 cni.go:95] Creating CNI manager for ""
	I0602 11:08:19.665024   14271 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:08:19.665044   14271 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 11:08:19.665056   14271 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220602110711-2113 NodeName:default-k8s-different-port-20220602110711-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 Cgroup
Driver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 11:08:19.665176   14271 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "default-k8s-different-port-20220602110711-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 11:08:19.665248   14271 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=default-k8s-different-port-20220602110711-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220602110711-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
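A small text/template sketch of rendering the InitConfiguration fragment shown above from the options the log prints (advertise address, bind port, node name, CRI socket); the struct and template here are illustrative, not minikube's actual kubeadm template:

	// Sketch: render a kubeadm InitConfiguration fragment from a few options.
	package main

	import (
		"os"
		"text/template"
	)

	type initOpts struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("init").Parse(initTmpl))
		_ = t.Execute(os.Stdout, initOpts{
			AdvertiseAddress: "192.168.58.2",
			BindPort:         8444,
			NodeName:         "default-k8s-different-port-20220602110711-2113",
			CRISocket:        "/var/run/dockershim.sock",
		})
	}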
	I0602 11:08:19.665304   14271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 11:08:19.673262   14271 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 11:08:19.673322   14271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 11:08:19.680190   14271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0602 11:08:19.692477   14271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 11:08:19.704606   14271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I0602 11:08:19.717011   14271 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 11:08:19.720737   14271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:08:19.730066   14271 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113 for IP: 192.168.58.2
	I0602 11:08:19.730171   14271 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 11:08:19.730221   14271 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 11:08:19.730312   14271 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.key
	I0602 11:08:19.730378   14271 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/apiserver.key.cee25041
	I0602 11:08:19.730457   14271 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/proxy-client.key
	I0602 11:08:19.730674   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 11:08:19.730711   14271 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 11:08:19.730724   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 11:08:19.730754   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 11:08:19.730789   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 11:08:19.730822   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 11:08:19.730884   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:08:19.731420   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 11:08:19.748043   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0602 11:08:19.764498   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 11:08:19.781157   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 11:08:19.797871   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 11:08:19.814159   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 11:08:19.830887   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 11:08:19.848080   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 11:08:19.865456   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 11:08:19.881698   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 11:08:19.898483   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 11:08:19.914958   14271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 11:08:19.927686   14271 ssh_runner.go:195] Run: openssl version
	I0602 11:08:19.932835   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 11:08:19.940543   14271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:08:19.944572   14271 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:08:19.944611   14271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:08:19.949643   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 11:08:19.956574   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 11:08:19.964137   14271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 11:08:19.967898   14271 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 11:08:19.967937   14271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 11:08:19.973115   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
	I0602 11:08:19.980514   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 11:08:19.988285   14271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 11:08:19.991947   14271 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 11:08:19.991984   14271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 11:08:19.997046   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
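The hashed-symlink step visible in the last few commands, sketched end to end: "openssl x509 -hash -noout -in <cert>" prints the subject hash, and the certificate is linked under /etc/ssl/certs as <hash>.0 (e.g. b5213941.0 above). Assumes the openssl binary is installed; the certificate path is taken from the log:

	// Sketch: compute the OpenSSL subject hash and print the symlink command
	// that makes the certificate discoverable under /etc/ssl/certs.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		hash := strings.TrimSpace(string(out))
		fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", cert, hash)
	}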
	I0602 11:08:20.004017   14271 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220602110711-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220602110711-2113
Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m
0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:08:20.004132   14271 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:08:20.033806   14271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 11:08:20.041165   14271 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 11:08:20.041189   14271 kubeadm.go:626] restartCluster start
	I0602 11:08:20.041238   14271 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 11:08:20.047947   14271 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.047999   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:20.119320   14271 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220602110711-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:08:20.119501   14271 kubeconfig.go:127] "default-k8s-different-port-20220602110711-2113" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig - will repair!
	I0602 11:08:20.119891   14271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:08:20.121169   14271 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 11:08:20.128818   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.128866   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.140758   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.341344   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.341425   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.350851   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.221322   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:20.272710   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.272723   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:20.272780   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:20.303113   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.303125   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:20.303179   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:20.332713   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.332726   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:20.332786   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:20.363526   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.363541   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:20.363604   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:20.393277   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.393290   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:20.393345   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:20.423123   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.423136   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:20.423189   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:20.452818   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.452831   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:20.452894   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:20.482672   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.482685   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:20.482691   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:20.482699   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:20.537779   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:20.537790   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:20.537797   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:20.551744   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:20.551756   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:22.603781   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051975725s)
	I0602 11:08:22.603889   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:22.603895   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:22.641201   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:22.641214   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
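[editor's note] Before each apiserver retry, the 13778 run gathers the same diagnostics: the kubelet and Docker journals, dmesg, and container status via crictl with a docker fallback. A rough local sketch of that gathering pass, assuming os/exec with /bin/bash in place of minikube's ssh_runner and reusing the command strings from the log:

	// Sketch of the "Gathering logs for ..." pass above, run locally rather than
	// over SSH into the node. Error handling is simplified.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		probes := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"Docker":           "sudo journalctl -u docker -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range probes {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s failed: %v\n", name, err)
				continue
			}
			fmt.Printf("=== %s ===\n%s\n", name, out)
		}
	}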
	I0602 11:08:25.154798   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:20.540903   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.541022   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.549461   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.740967   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.741117   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.752173   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.940840   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.940902   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.949819   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.142949   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.143091   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.153503   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.341193   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.341297   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.352208   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.542948   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.543068   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.553688   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.742445   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.742610   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.752897   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.941532   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.941622   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.952125   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.143019   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.143112   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.154053   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.342959   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.343122   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.354067   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.541852   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.541959   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.552227   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.743005   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.743174   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.753673   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.941169   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.941282   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.951571   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.143019   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:23.143121   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:23.154033   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.154043   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:23.154095   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:23.162400   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.162410   14271 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
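[editor's note] The 14271 run above polls "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 200ms and gives up after a few seconds, concluding the cluster needs reconfiguring. A minimal sketch of that poll, assuming local execution instead of the ssh_runner used in the log:

	// Keep running the same pgrep until it succeeds or a deadline passes,
	// mirroring the api_server.go status checks above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				fmt.Printf("apiserver pid: %s", out)
				return nil
			}
			time.Sleep(200 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for the condition")
	}

	func main() {
		if err := waitForAPIServerProcess(3 * time.Second); err != nil {
			fmt.Println("needs reconfigure: apiserver error:", err)
		}
	}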
	I0602 11:08:23.162418   14271 kubeadm.go:1092] stopping kube-system containers ...
	I0602 11:08:23.162473   14271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:08:23.192549   14271 docker.go:442] Stopping containers: [5424fc41e82a 5f5b0dd7b333 f35280654931 b9a9032aa6a0 5f2b057e31f6 0a04721ed918 e3c1dd0cd3c0 d432e94b8645 553b06952827 41e494ce31b3 947af7b50e63 059f7d232752 d3a03a2fc0b9 bf8a809c5a96 cff10caa9374 680bea8fcf84]
	I0602 11:08:23.192630   14271 ssh_runner.go:195] Run: docker stop 5424fc41e82a 5f5b0dd7b333 f35280654931 b9a9032aa6a0 5f2b057e31f6 0a04721ed918 e3c1dd0cd3c0 d432e94b8645 553b06952827 41e494ce31b3 947af7b50e63 059f7d232752 d3a03a2fc0b9 bf8a809c5a96 cff10caa9374 680bea8fcf84
	I0602 11:08:23.222876   14271 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 11:08:23.233125   14271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:08:23.240768   14271 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  2 18:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  2 18:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Jun  2 18:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  2 18:07 /etc/kubernetes/scheduler.conf
	
	I0602 11:08:23.240824   14271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0602 11:08:23.248274   14271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0602 11:08:23.255564   14271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0602 11:08:23.263617   14271 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.263680   14271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0602 11:08:23.270956   14271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0602 11:08:23.278150   14271 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.278193   14271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0602 11:08:23.284827   14271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:08:23.292008   14271 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 11:08:23.292025   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:23.336140   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:24.189152   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:24.321146   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:24.367977   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:24.415440   14271 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:08:24.415503   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:24.926339   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:25.424317   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:25.476098   14271 api_server.go:71] duration metric: took 1.060645549s to wait for apiserver process to appear ...
	I0602 11:08:25.476124   14271 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:08:25.476138   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:25.477296   14271 api_server.go:256] stopped: https://127.0.0.1:52983/healthz: Get "https://127.0.0.1:52983/healthz": EOF
	I0602 11:08:25.221414   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:25.296178   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.296191   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:25.296260   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:25.329053   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.329071   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:25.329164   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:25.357741   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.357752   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:25.357810   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:25.390667   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.390682   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:25.390741   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:25.437576   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.437588   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:25.437644   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:25.466359   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.466375   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:25.466456   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:25.502948   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.502962   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:25.503019   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:25.538129   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.538146   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:25.538154   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:25.538162   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:25.582011   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:25.582029   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:25.595600   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:25.595615   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:25.652328   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:25.652345   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:25.652351   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:25.665370   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:25.665381   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:27.726129   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060700298s)
	I0602 11:08:25.977412   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:27.865104   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0602 11:08:27.865120   14271 api_server.go:102] status: https://127.0.0.1:52983/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0602 11:08:27.978216   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:27.984906   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:08:27.984929   14271 api_server.go:102] status: https://127.0.0.1:52983/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:08:28.477488   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:28.484388   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:08:28.484405   14271 api_server.go:102] status: https://127.0.0.1:52983/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:08:28.977988   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:28.983267   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:08:28.983291   14271 api_server.go:102] status: https://127.0.0.1:52983/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:08:29.478044   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:29.483906   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 200:
	ok
	I0602 11:08:29.490553   14271 api_server.go:140] control plane version: v1.23.6
	I0602 11:08:29.490564   14271 api_server.go:130] duration metric: took 4.014365072s to wait for apiserver health ...
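[editor's note] The healthz wait above progresses from EOF to 403 (anonymous user), to 500 while post-start hooks are still failing, and finally to 200, taking about four seconds in total. A self-contained sketch of such a poll; TLS verification is skipped here only to keep the example standalone, whereas the real client authenticates with the cluster's certificates:

	// Poll /healthz until it returns 200 or the timeout expires.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body) // expect "ok"
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://127.0.0.1:52983/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}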
	I0602 11:08:29.490572   14271 cni.go:95] Creating CNI manager for ""
	I0602 11:08:29.490579   14271 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:08:29.490591   14271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:08:29.498298   14271 system_pods.go:59] 8 kube-system pods found
	I0602 11:08:29.498313   14271 system_pods.go:61] "coredns-64897985d-h47dc" [7accc8c2-babb-4fb2-a915-34bdcaf81942] Running
	I0602 11:08:29.498323   14271 system_pods.go:61] "etcd-default-k8s-different-port-20220602110711-2113" [9a73a84a-8a22-4366-a66d-df315295a7a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 11:08:29.498328   14271 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220602110711-2113" [c11ca282-ae9e-4bb4-9517-d6c8bd9deab8] Running
	I0602 11:08:29.498333   14271 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220602110711-2113" [f8bd0bd0-acca-48d9-8f9f-33abf2cb6de2] Running
	I0602 11:08:29.498337   14271 system_pods.go:61] "kube-proxy-jrk2q" [7fa38b28-1f8b-4ef3-9983-3724a52b8b00] Running
	I0602 11:08:29.498341   14271 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220602110711-2113" [5fa1cd09-e48e-465c-8a2c-fc11ab91bb5d] Running
	I0602 11:08:29.498348   14271 system_pods.go:61] "metrics-server-b955d9d8-lnk7h" [a26e7c1f-21ad-400e-9ea2-7d626d72922d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:08:29.498356   14271 system_pods.go:61] "storage-provisioner" [1e7818f7-f246-4230-bd2a-1013266312d3] Running
	I0602 11:08:29.498361   14271 system_pods.go:74] duration metric: took 7.764866ms to wait for pod list to return data ...
	I0602 11:08:29.498367   14271 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:08:29.501391   14271 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:08:29.501404   14271 node_conditions.go:123] node cpu capacity is 6
	I0602 11:08:29.501415   14271 node_conditions.go:105] duration metric: took 3.043692ms to run NodePressure ...
	I0602 11:08:29.501426   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:29.615914   14271 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0602 11:08:29.619660   14271 kubeadm.go:777] kubelet initialised
	I0602 11:08:29.619670   14271 kubeadm.go:778] duration metric: took 3.743155ms waiting for restarted kubelet to initialise ...
	I0602 11:08:29.619678   14271 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:08:29.624145   14271 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-h47dc" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:29.628299   14271 pod_ready.go:92] pod "coredns-64897985d-h47dc" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:29.628307   14271 pod_ready.go:81] duration metric: took 4.151112ms waiting for pod "coredns-64897985d-h47dc" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:29.628314   14271 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:30.226574   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:30.721539   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:30.759508   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.759521   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:30.759579   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:30.792623   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.792637   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:30.792712   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:30.822014   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.822028   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:30.822086   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:30.851154   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.851168   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:30.851240   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:30.880918   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.880931   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:30.880986   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:30.910502   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.910515   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:30.910577   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:30.941645   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.941657   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:30.941714   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:30.972909   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.972921   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:30.972928   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:30.972934   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:30.984875   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:30.984888   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:31.040921   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:31.040935   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:31.040942   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:31.053333   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:31.053346   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:33.107850   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05445655s)
	I0602 11:08:33.107952   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:33.107959   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:31.641210   14271 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:33.641265   14271 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:35.650135   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:35.721787   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:35.751661   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.751673   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:35.751730   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:35.780322   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.780334   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:35.780393   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:35.809983   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.809996   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:35.810052   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:35.838069   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.838081   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:35.838140   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:35.866612   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.866629   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:35.866713   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:35.897341   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.897354   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:35.897409   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:35.928444   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.928456   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:35.928513   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:35.956497   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.956510   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:35.956517   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:35.956524   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:35.969093   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:35.969108   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:38.024274   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055118179s)
	I0602 11:08:38.024385   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:38.024393   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:38.064021   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:38.064037   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:38.075931   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:38.075944   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:38.130990   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:35.642462   14271 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:36.642462   14271 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:36.642475   14271 pod_ready.go:81] duration metric: took 7.014033821s waiting for pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:36.642481   14271 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:38.655878   14271 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:40.632494   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:40.722073   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:40.750220   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.750232   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:40.750297   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:40.778245   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.778256   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:40.778304   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:40.807262   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.807273   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:40.807333   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:40.836172   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.836183   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:40.836239   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:40.864838   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.864850   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:40.864906   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:40.893840   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.893852   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:40.893910   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:40.923704   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.923715   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:40.923773   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:40.951957   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.951970   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:40.951978   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:40.951986   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:41.004848   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:41.004859   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:41.004865   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:41.017334   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:41.017346   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:43.066770   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0493766s)
	I0602 11:08:43.066886   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:43.066894   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:43.107798   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:43.107814   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:41.154674   14271 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:43.156222   14271 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:43.156234   14271 pod_ready.go:81] duration metric: took 6.513634404s waiting for pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:43.156241   14271 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.668817   14271 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:44.668829   14271 pod_ready.go:81] duration metric: took 1.512556931s waiting for pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.668835   14271 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jrk2q" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.673173   14271 pod_ready.go:92] pod "kube-proxy-jrk2q" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:44.673180   14271 pod_ready.go:81] duration metric: took 4.340525ms waiting for pod "kube-proxy-jrk2q" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.673186   14271 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.677163   14271 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:44.677170   14271 pod_ready.go:81] duration metric: took 3.980246ms waiting for pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.677176   14271 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace to be "Ready" ...
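[editor's note] The pod_ready waits above check each system-critical pod's Ready condition in turn (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, metrics-server). The sketch below illustrates that condition check with client-go against the default kubeconfig; it shows what "Ready" means here and is not minikube's own pod_ready helper.

	// Poll one kube-system pod until its PodReady condition is True or a
	// deadline passes. Uses the standard k8s.io client-go packages.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Pod name taken from the log above; any kube-system pod works the same way.
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-b955d9d8-lnk7h", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("pod never became Ready")
	}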
	I0602 11:08:45.621045   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:45.722513   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:45.753852   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.753863   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:45.753920   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:45.782032   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.782044   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:45.782103   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:45.811660   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.811672   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:45.811730   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:45.841102   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.841115   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:45.841176   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:45.869555   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.869568   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:45.869625   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:45.896999   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.897011   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:45.897079   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:45.925033   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.925045   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:45.925100   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:45.955532   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.955543   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:45.955550   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:45.955556   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:45.994815   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:45.994828   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:46.006706   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:46.006718   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:46.059309   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:46.059318   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:46.059325   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:46.071706   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:46.071719   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:48.125554   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053788045s)
	I0602 11:08:46.690067   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:49.192051   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:50.627972   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:50.722301   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:50.752680   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.752693   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:50.752749   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:50.781019   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.781032   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:50.781090   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:50.810077   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.810088   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:50.810152   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:50.839097   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.839108   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:50.839164   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:50.870493   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.870504   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:50.870560   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:50.899156   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.899168   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:50.899224   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:50.927401   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.927413   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:50.927469   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:50.970889   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.970901   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:50.970908   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:50.970915   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:51.026070   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:51.026080   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:51.026086   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:51.037940   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:51.037952   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:53.091015   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053015843s)
	I0602 11:08:53.091123   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:53.091130   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:53.130767   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:53.130781   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:51.688335   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:53.689175   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:55.642775   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:55.722143   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:55.752596   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.752608   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:55.752663   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:55.781383   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.781395   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:55.781453   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:55.810740   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.810751   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:55.810806   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:55.839025   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.839037   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:55.839092   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:55.868111   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.868123   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:55.868185   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:55.896365   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.896376   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:55.896436   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:55.925240   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.925252   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:55.925308   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:55.954351   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.954362   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:55.954370   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:55.954377   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:55.994349   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:55.994360   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:56.006541   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:56.006553   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:56.060230   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:56.060240   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:56.060246   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:56.072372   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:56.072385   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:58.126471   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054039162s)
	I0602 11:08:56.187836   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:58.190416   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:00.626897   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:09:00.636995   13778 kubeadm.go:630] restartCluster took 4m5.698955011s
	W0602 11:09:00.637074   13778 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0602 11:09:00.637089   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:09:01.056935   13778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:09:01.066336   13778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:09:01.073784   13778 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:09:01.073830   13778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:09:01.081072   13778 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:09:01.081099   13778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:09:01.817978   13778 out.go:204]   - Generating certificates and keys ...
	I0602 11:09:02.504280   13778 out.go:204]   - Booting up control plane ...
	I0602 11:09:00.687408   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:02.689765   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:04.689850   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:07.189249   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:09.190335   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:11.691237   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:14.187781   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:16.190080   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:18.687798   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:20.690432   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:23.187958   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:25.190427   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:27.687964   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:29.691339   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:32.188132   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:34.189396   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:36.189672   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:38.689846   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:41.188841   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:43.189653   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:45.190339   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:47.690415   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:50.188091   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:52.191824   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:54.690834   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:56.691875   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:59.189437   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:01.190943   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:03.191954   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:05.692452   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:07.692576   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:10.189968   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:12.690983   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:15.188184   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:17.189909   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:19.688905   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:21.691564   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:24.190443   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:26.690498   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:28.691268   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:31.190793   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:33.191155   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:35.690951   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:37.692551   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:40.193163   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:42.691386   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:44.692387   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:46.692685   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:49.193533   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:51.691604   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:53.693237   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	W0602 11:10:57.423207   13778 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
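The kubeadm output above amounts to a short troubleshooting how-to: confirm whether the kubelet is actually running, then look for control-plane containers that failed on start. A minimal sketch of those checks, run inside the node (for example via `minikube ssh`), is shown below; `CONTAINERID` is a placeholder for whatever failing container the listing turns up, not a value taken from this run.

	systemctl status kubelet                  # is the kubelet service active?
	journalctl -xeu kubelet                   # recent kubelet log entries
	docker ps -a | grep kube | grep -v pause  # list Kubernetes containers, if any were created
	docker logs CONTAINERID                   # inspect a failing container's logs

In this run the container listing would presumably come back empty (the repeated "0 containers" lines above), which is consistent with the kubelet never coming up rather than a control-plane container crashing.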
	
	I0602 11:10:57.423236   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:10:57.840204   13778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:10:57.849925   13778 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:10:57.849972   13778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:10:57.857794   13778 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:10:57.857811   13778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:10:58.606461   13778 out.go:204]   - Generating certificates and keys ...
	I0602 11:10:59.124567   13778 out.go:204]   - Booting up control plane ...
	I0602 11:10:56.192552   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:58.689473   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:00.693155   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:03.193549   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:05.194270   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:07.693653   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:10.192674   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:12.691715   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:14.691808   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:17.191371   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:19.193132   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:21.193202   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:23.691940   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:25.692807   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:27.692954   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:30.191988   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:32.194025   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:34.692688   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:36.692994   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:38.693797   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:41.193247   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:43.693628   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:45.694558   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:48.191576   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:50.193727   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:52.194036   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:54.194247   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:56.694218   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:59.193493   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:01.194007   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:03.194607   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:05.693468   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:07.693608   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:09.695228   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:12.194703   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:14.693976   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:17.192125   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:19.194163   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:21.194395   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:23.693999   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:26.191617   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:28.194216   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:30.694582   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:33.193720   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:35.694487   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:38.194086   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:40.693116   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:42.693433   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:44.686833   14271 pod_ready.go:81] duration metric: took 4m0.005479685s waiting for pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace to be "Ready" ...
	E0602 11:12:44.686847   14271 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace to be "Ready" (will not retry!)
	I0602 11:12:44.686859   14271 pod_ready.go:38] duration metric: took 4m15.062761979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:12:44.686881   14271 kubeadm.go:630] restartCluster took 4m24.641108189s
	W0602 11:12:44.686956   14271 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0602 11:12:44.686973   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:12:54.041678   13778 kubeadm.go:397] StartCluster complete in 7m59.136004493s
	I0602 11:12:54.041759   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:12:54.071372   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.071384   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:12:54.071441   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:12:54.100053   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.100066   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:12:54.100125   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:12:54.128275   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.128286   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:12:54.128343   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:12:54.157653   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.157665   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:12:54.157722   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:12:54.187430   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.187443   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:12:54.187496   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:12:54.215461   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.215472   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:12:54.215526   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:12:54.244945   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.244956   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:12:54.245011   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:12:54.274697   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.274709   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:12:54.274716   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:12:54.274725   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:12:54.287581   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:12:54.287595   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:12:56.340056   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052413965s)
	I0602 11:12:56.340164   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:12:56.340171   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:12:56.380800   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:12:56.380813   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:12:56.392375   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:12:56.392386   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:12:56.445060   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0602 11:12:56.445088   13778 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0602 11:12:56.445103   13778 out.go:239] * 
	W0602 11:12:56.445207   13778 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 11:12:56.445222   13778 out.go:239] * 
	W0602 11:12:56.445819   13778 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 11:12:56.530257   13778 out.go:177] 
	W0602 11:12:56.572600   13778 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 11:12:56.572701   13778 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0602 11:12:56.572743   13778 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0602 11:12:56.593452   13778 out.go:177] 
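This profile ultimately exits with K8S_KUBELET_NOT_RUNNING, and minikube's own suggestion above is to check `journalctl -xeu kubelet` and retry with the kubelet's cgroup driver forced to systemd. A sketch of that retry follows; the profile name is a placeholder (it is not shown in this excerpt), while the Kubernetes version and driver match the ones used in this run.

	minikube start -p <profile> \
	  --kubernetes-version=v1.16.0 \
	  --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd

If that still fails, the boxed advice above asks for `minikube logs --file=logs.txt` to be attached to a new GitHub issue.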
	I0602 11:13:14.184065   14271 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (29.496567408s)
	I0602 11:13:14.184130   14271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:13:14.194290   14271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:13:14.202145   14271 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:13:14.202192   14271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:13:14.210100   14271 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:13:14.210122   14271 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:13:14.712305   14271 out.go:204]   - Generating certificates and keys ...
	I0602 11:13:15.565732   14271 out.go:204]   - Booting up control plane ...
	I0602 11:13:22.111263   14271 out.go:204]   - Configuring RBAC rules ...
	I0602 11:13:22.490451   14271 cni.go:95] Creating CNI manager for ""
	I0602 11:13:22.490462   14271 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:13:22.490476   14271 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 11:13:22.490574   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=default-k8s-different-port-20220602110711-2113 minikube.k8s.io/updated_at=2022_06_02T11_13_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:22.490580   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:22.596110   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:22.677315   14271 ops.go:34] apiserver oom_adj: -16
	I0602 11:13:23.220603   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:23.720089   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:24.218643   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:24.718821   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:25.218844   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:25.718927   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:26.220735   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:26.718665   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:27.220534   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:27.719096   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:28.219369   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:28.718683   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:29.218768   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:29.718884   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:30.218745   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:30.719801   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:31.220266   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:31.718699   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:32.220130   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:32.719009   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:33.218958   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:33.720809   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:34.220786   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:34.718815   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:35.218757   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:35.274837   14271 kubeadm.go:1045] duration metric: took 12.78411621s to wait for elevateKubeSystemPrivileges.
	I0602 11:13:35.274851   14271 kubeadm.go:397] StartCluster complete in 5m15.265389598s
	I0602 11:13:35.274869   14271 settings.go:142] acquiring lock: {Name:mka48fc2cc9e132f8df9370d54d7f09abdd5d2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:13:35.274953   14271 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:13:35.275477   14271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:13:35.790361   14271 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220602110711-2113" rescaled to 1
	I0602 11:13:35.790398   14271 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 11:13:35.829728   14271 out.go:177] * Verifying Kubernetes components...
	I0602 11:13:35.790423   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 11:13:35.790448   14271 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0602 11:13:35.790558   14271 config.go:178] Loaded profile config "default-k8s-different-port-20220602110711-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:13:35.888819   14271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:13:35.888817   14271 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:35.888843   14271 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:35.888865   14271 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220602110711-2113"
	W0602 11:13:35.888876   14271 addons.go:165] addon storage-provisioner should already be in state true
	I0602 11:13:35.888867   14271 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:35.888869   14271 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:35.888875   14271 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:35.888904   14271 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:35.888920   14271 host.go:66] Checking if "default-k8s-different-port-20220602110711-2113" exists ...
	W0602 11:13:35.888925   14271 addons.go:165] addon dashboard should already be in state true
	I0602 11:13:35.888947   14271 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220602110711-2113"
	W0602 11:13:35.888967   14271 addons.go:165] addon metrics-server should already be in state true
	I0602 11:13:35.888978   14271 host.go:66] Checking if "default-k8s-different-port-20220602110711-2113" exists ...
	I0602 11:13:35.889061   14271 host.go:66] Checking if "default-k8s-different-port-20220602110711-2113" exists ...
	I0602 11:13:35.889232   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:13:35.889377   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:13:35.890065   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:13:35.892758   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:13:35.987270   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:13:35.987269   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 11:13:36.115814   14271 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0602 11:13:36.005658   14271 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:36.042737   14271 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 11:13:36.078510   14271 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	W0602 11:13:36.115876   14271 addons.go:165] addon default-storageclass should already be in state true
	I0602 11:13:36.153934   14271 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0602 11:13:36.174686   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0602 11:13:36.174720   14271 host.go:66] Checking if "default-k8s-different-port-20220602110711-2113" exists ...
	I0602 11:13:36.174772   14271 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:13:36.211865   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 11:13:36.211944   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:13:36.211966   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:13:36.214784   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:13:36.248803   14271 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0602 11:13:36.268871   14271 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220602110711-2113" to be "Ready" ...
	I0602 11:13:36.285816   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0602 11:13:36.285833   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0602 11:13:36.285936   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:13:36.293989   14271 node_ready.go:49] node "default-k8s-different-port-20220602110711-2113" has status "Ready":"True"
	I0602 11:13:36.294010   14271 node_ready.go:38] duration metric: took 8.325769ms waiting for node "default-k8s-different-port-20220602110711-2113" to be "Ready" ...
	I0602 11:13:36.294020   14271 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:13:36.303564   14271 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-q7f6l" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:36.316138   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:13:36.319822   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:13:36.343187   14271 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 11:13:36.343200   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 11:13:36.343266   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:13:36.386409   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:13:36.428996   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:13:36.487887   14271 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0602 11:13:36.487899   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0602 11:13:36.565029   14271 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0602 11:13:36.565043   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0602 11:13:36.566040   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0602 11:13:36.566052   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0602 11:13:36.568172   14271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:13:36.582892   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0602 11:13:36.582907   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0602 11:13:36.584953   14271 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:13:36.584970   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0602 11:13:36.667739   14271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:13:36.677808   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0602 11:13:36.677830   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0602 11:13:36.683133   14271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 11:13:36.769655   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0602 11:13:36.769670   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0602 11:13:36.856979   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0602 11:13:36.856995   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0602 11:13:36.886658   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0602 11:13:36.886671   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0602 11:13:37.062085   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0602 11:13:37.062114   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0602 11:13:37.163633   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0602 11:13:37.163656   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0602 11:13:37.188519   14271 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.201194084s)
	I0602 11:13:37.188539   14271 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0602 11:13:37.255684   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:13:37.255698   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0602 11:13:37.292360   14271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:13:37.552623   14271 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:38.220990   14271 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0602 11:13:38.279795   14271 addons.go:417] enableAddons completed in 2.489282965s
	I0602 11:13:38.320701   14271 pod_ready.go:102] pod "coredns-64897985d-q7f6l" in "kube-system" namespace has status "Ready":"False"
	I0602 11:13:39.321022   14271 pod_ready.go:92] pod "coredns-64897985d-q7f6l" in "kube-system" namespace has status "Ready":"True"
	I0602 11:13:39.321036   14271 pod_ready.go:81] duration metric: took 3.017402616s waiting for pod "coredns-64897985d-q7f6l" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.321043   14271 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-qp56l" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.350329   14271 pod_ready.go:92] pod "coredns-64897985d-qp56l" in "kube-system" namespace has status "Ready":"True"
	I0602 11:13:39.350338   14271 pod_ready.go:81] duration metric: took 29.290028ms waiting for pod "coredns-64897985d-qp56l" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.350344   14271 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.355598   14271 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:13:39.355608   14271 pod_ready.go:81] duration metric: took 5.258951ms waiting for pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.355614   14271 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.360359   14271 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:13:39.360370   14271 pod_ready.go:81] duration metric: took 4.750812ms waiting for pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.360384   14271 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.365311   14271 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:13:39.365325   14271 pod_ready.go:81] duration metric: took 4.926738ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.365337   14271 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xbj6w" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:40.724473   14271 pod_ready.go:92] pod "kube-proxy-xbj6w" in "kube-system" namespace has status "Ready":"True"
	I0602 11:13:40.724488   14271 pod_ready.go:81] duration metric: took 1.359119427s waiting for pod "kube-proxy-xbj6w" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:40.724496   14271 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:40.919321   14271 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:13:40.919331   14271 pod_ready.go:81] duration metric: took 194.825557ms waiting for pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:40.919336   14271 pod_ready.go:38] duration metric: took 4.62522339s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:13:40.919356   14271 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:13:40.919409   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:13:40.931035   14271 api_server.go:71] duration metric: took 5.140531013s to wait for apiserver process to appear ...
	I0602 11:13:40.931050   14271 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:13:40.931057   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:13:40.936209   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 200:
	ok
	I0602 11:13:40.937293   14271 api_server.go:140] control plane version: v1.23.6
	I0602 11:13:40.937301   14271 api_server.go:130] duration metric: took 6.246771ms to wait for apiserver health ...
	I0602 11:13:40.937305   14271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:13:41.120621   14271 system_pods.go:59] 9 kube-system pods found
	I0602 11:13:41.120635   14271 system_pods.go:61] "coredns-64897985d-q7f6l" [9348f86d-08db-41f1-a8fa-33f0b74cf0ab] Running
	I0602 11:13:41.120638   14271 system_pods.go:61] "coredns-64897985d-qp56l" [2e9f42d9-06d2-44c5-ab59-2560b50fd5c5] Running
	I0602 11:13:41.120642   14271 system_pods.go:61] "etcd-default-k8s-different-port-20220602110711-2113" [f8512c1a-947d-4506-b868-13343b661686] Running
	I0602 11:13:41.120647   14271 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220602110711-2113" [78222b72-98f6-4017-92ea-655597e0b1e9] Running
	I0602 11:13:41.120651   14271 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220602110711-2113" [52d52399-ee97-41c1-93be-483fe82a7b3b] Running
	I0602 11:13:41.120655   14271 system_pods.go:61] "kube-proxy-xbj6w" [e3405b28-0afd-4a57-b9aa-4c12c8880eee] Running
	I0602 11:13:41.120670   14271 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220602110711-2113" [a8d2d945-2501-4768-8ead-483ebbe19526] Running
	I0602 11:13:41.120677   14271 system_pods.go:61] "metrics-server-b955d9d8-mmzb2" [e28d8ad9-0512-4720-8607-2033e71a4b2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:13:41.120682   14271 system_pods.go:61] "storage-provisioner" [15b1bcd9-2251-4762-bbe4-61e3c8db0e3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 11:13:41.120686   14271 system_pods.go:74] duration metric: took 183.374077ms to wait for pod list to return data ...
	I0602 11:13:41.120691   14271 default_sa.go:34] waiting for default service account to be created ...
	I0602 11:13:41.318742   14271 default_sa.go:45] found service account: "default"
	I0602 11:13:41.318755   14271 default_sa.go:55] duration metric: took 198.056977ms for default service account to be created ...
	I0602 11:13:41.318760   14271 system_pods.go:116] waiting for k8s-apps to be running ...
	I0602 11:13:41.524024   14271 system_pods.go:86] 9 kube-system pods found
	I0602 11:13:41.524043   14271 system_pods.go:89] "coredns-64897985d-q7f6l" [9348f86d-08db-41f1-a8fa-33f0b74cf0ab] Running
	I0602 11:13:41.524050   14271 system_pods.go:89] "coredns-64897985d-qp56l" [2e9f42d9-06d2-44c5-ab59-2560b50fd5c5] Running
	I0602 11:13:41.524056   14271 system_pods.go:89] "etcd-default-k8s-different-port-20220602110711-2113" [f8512c1a-947d-4506-b868-13343b661686] Running
	I0602 11:13:41.524062   14271 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220602110711-2113" [78222b72-98f6-4017-92ea-655597e0b1e9] Running
	I0602 11:13:41.524068   14271 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220602110711-2113" [52d52399-ee97-41c1-93be-483fe82a7b3b] Running
	I0602 11:13:41.524072   14271 system_pods.go:89] "kube-proxy-xbj6w" [e3405b28-0afd-4a57-b9aa-4c12c8880eee] Running
	I0602 11:13:41.524078   14271 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220602110711-2113" [a8d2d945-2501-4768-8ead-483ebbe19526] Running
	I0602 11:13:41.524090   14271 system_pods.go:89] "metrics-server-b955d9d8-mmzb2" [e28d8ad9-0512-4720-8607-2033e71a4b2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:13:41.524098   14271 system_pods.go:89] "storage-provisioner" [15b1bcd9-2251-4762-bbe4-61e3c8db0e3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 11:13:41.524112   14271 system_pods.go:126] duration metric: took 205.343684ms to wait for k8s-apps to be running ...
	I0602 11:13:41.524125   14271 system_svc.go:44] waiting for kubelet service to be running ....
	I0602 11:13:41.524183   14271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:13:41.538402   14271 system_svc.go:56] duration metric: took 14.275295ms WaitForService to wait for kubelet.
	I0602 11:13:41.538416   14271 kubeadm.go:572] duration metric: took 5.747903146s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0602 11:13:41.538436   14271 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:13:41.718284   14271 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:13:41.718296   14271 node_conditions.go:123] node cpu capacity is 6
	I0602 11:13:41.718308   14271 node_conditions.go:105] duration metric: took 179.851499ms to run NodePressure ...
	I0602 11:13:41.718315   14271 start.go:213] waiting for startup goroutines ...
	I0602 11:13:41.748420   14271 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0602 11:13:41.770365   14271 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220602110711-2113" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 18:08:16 UTC, end at Thu 2022-06-02 18:14:35 UTC. --
	Jun 02 18:13:01 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:01.004442302Z" level=info msg="ignoring event" container=c9700dfedcf3cbf46adef452c3867768499cbeffd2f8788021d3b1ba5dea6f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:01 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:01.111224039Z" level=info msg="ignoring event" container=5249eac0ee08f0f9d76d5c19684ee4182883c12337432cf249a49aa7965d8caf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:01 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:01.220940681Z" level=info msg="ignoring event" container=51b9f3daee9a10e9052536940024912e32f1bf23ea811d3474c8e7a23bbc2a44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:11 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:11.341976813Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=fe28ec423dfc588f4c91ef67ea093e89e646b9b36106e1a3383c241a4104f1a3
	Jun 02 18:13:11 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:11.372698211Z" level=info msg="ignoring event" container=fe28ec423dfc588f4c91ef67ea093e89e646b9b36106e1a3383c241a4104f1a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:11 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:11.492763172Z" level=info msg="ignoring event" container=b027d25457fadf51caeba229c1421e38ec3d07a7f260c1328ad4e7ae57b8a241 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:12 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:12.711909080Z" level=info msg="ignoring event" container=e3f092a4e4b8acf18687dc60526070c9d8d232612cc91cb52cf0731073f02c08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:12 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:12.820503412Z" level=info msg="ignoring event" container=f6fdeb2d2024328e194ac49cf725fd4cd9a0812f07d10b8455a0450e16b6313e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:12 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:12.919784024Z" level=info msg="ignoring event" container=305edf5a3e80c60246fe6dfcc5a28ff8e49202a0a46640dbb798f10250ab3fb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:13 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:13.025058512Z" level=info msg="ignoring event" container=c3d571000adc323d3a9bf6988309cca5abd5bafea930f5331d5ba851a3d93907 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:13 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:13.126153005Z" level=info msg="ignoring event" container=e26bcb8cbcbfcdc81456207f69301a6b88ef553e385ce754f54997b41f58ce4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:13 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:13.249009502Z" level=info msg="ignoring event" container=63db6ec64f911f2d15fee1d3fb9f928cfdf61ac996e2861115beed0a83f23968 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:38 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:38.408686529Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:13:38 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:38.408729060Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:13:38 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:38.411073194Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:13:39 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:39.773797307Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 02 18:13:42 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:42.085885151Z" level=info msg="ignoring event" container=b8347f055fbfe1cddb5e3632fef6cfa8376ccd28bc0da6c84d772dd2384f59e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:42 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:42.174100815Z" level=info msg="ignoring event" container=bbd08bec2896f6c80cd1992ef5444b7ef036a0d97623f6c50c88beed1da58407 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:45 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:45.203257961Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 02 18:13:45 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:45.433252897Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 02 18:13:48 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:48.488065827Z" level=info msg="ignoring event" container=8b8f98aacf94d6053a9a341c7aed14010e780745199b2aa1c38cea05dafc2c82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:48 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:48.721293917Z" level=info msg="ignoring event" container=19df3a02ce9fabb77b369289713d678fbc579775d435d90d17e2bb50649da0cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:51 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:51.708060537Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:13:51 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:51.708101761Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:13:51 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:51.709293217Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	19df3a02ce9fa       a90209bb39e3d                                                                                    47 seconds ago       Exited              dashboard-metrics-scraper   1                   76da2afb3b84e
	b32592514baa8       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   51 seconds ago       Running             kubernetes-dashboard        0                   2da3cbdc53b71
	10fca54a3cf3e       6e38f40d628db                                                                                    57 seconds ago       Running             storage-provisioner         0                   dac1a4c4d3db8
	46ce0d0cc477f       4c03754524064                                                                                    59 seconds ago       Running             kube-proxy                  0                   3e3e43cee22d7
	33c5ea97096cf       a4ca41631cc7a                                                                                    59 seconds ago       Running             coredns                     0                   504e0bb47eb30
	46a270fcaca30       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   1bd2cc75567c8
	700916401ac8b       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   8b2a93e6f922f
	442109d4ab3c3       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   ba6ef3456d1c8
	a1efd30b0df11       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   22c629e5f10ee
	
	* 
	* ==> coredns [33c5ea97096c] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220602110711-2113
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220602110711-2113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=default-k8s-different-port-20220602110711-2113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T11_13_22_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 18:13:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220602110711-2113
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 18:14:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 18:14:33 +0000   Thu, 02 Jun 2022 18:13:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 18:14:33 +0000   Thu, 02 Jun 2022 18:13:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 18:14:33 +0000   Thu, 02 Jun 2022 18:13:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 18:14:33 +0000   Thu, 02 Jun 2022 18:14:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    default-k8s-different-port-20220602110711-2113
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                f4097a03-fe19-4f34-a68b-cf1227538da7
	  Boot ID:                    a475dd08-72ba-4c6d-89c1-75a58adc3783
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-qp56l                                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     62s
	  kube-system                 etcd-default-k8s-different-port-20220602110711-2113                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         74s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220602110711-2113             250m (4%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220602110711-2113    200m (3%)     0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-xbj6w                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220602110711-2113             100m (1%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 metrics-server-b955d9d8-mmzb2                                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         59s
	  kube-system                 storage-provisioner                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-xt4wh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-cd7c84bfc-hqkxc                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 58s   kube-proxy  
	  Normal  NodeHasSufficientMemory  74s   kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s   kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s   kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 74s   kubelet     Starting kubelet.
	  Normal  NodeReady                64s   kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeReady
	  Normal  Starting                 3s    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s    kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s    kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3s    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s    kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [442109d4ab3c] <==
	* {"level":"info","ts":"2022-06-02T18:13:16.907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-06-02T18:13:16.908Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-06-02T18:13:16.910Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-02T18:13:16.910Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:13:16.910Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:13:16.910Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-02T18:13:16.910Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:13:17.704Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:13:17.704Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:13:17.704Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:13:17.704Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:default-k8s-different-port-20220602110711-2113 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T18:13:17.704Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:13:17.705Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-02T18:13:17.705Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T18:13:17.707Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T18:13:17.707Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  18:14:36 up  1:02,  0 users,  load average: 1.08, 0.88, 1.01
	Linux default-k8s-different-port-20220602110711-2113 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [700916401ac8] <==
	* I0602 18:13:20.222564       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0602 18:13:20.228285       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0602 18:13:20.230529       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0602 18:13:20.230538       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0602 18:13:20.535613       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 18:13:20.603648       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 18:13:20.704024       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0602 18:13:20.708113       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0602 18:13:20.708826       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 18:13:20.711789       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0602 18:13:21.354606       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 18:13:22.325496       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 18:13:22.331089       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0602 18:13:22.338882       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0602 18:13:22.503874       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0602 18:13:34.192335       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0602 18:13:35.140615       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0602 18:13:37.183853       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0602 18:13:37.495370       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.107.79.208]
	I0602 18:13:38.180001       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.96.44.30]
	I0602 18:13:38.195066       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.106.6.255]
	W0602 18:13:38.301513       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 18:13:38.301612       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 18:13:38.301619       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [a1efd30b0df1] <==
	* I0602 18:13:35.145860       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xbj6w"
	I0602 18:13:35.299057       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0602 18:13:35.302554       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-q7f6l"
	I0602 18:13:37.284978       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0602 18:13:37.291990       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-mmzb2"
	I0602 18:13:38.019546       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0602 18:13:38.025265       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:13:38.059404       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:13:38.067535       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-cd7c84bfc to 1"
	I0602 18:13:38.067582       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:13:38.067715       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:13:38.068520       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:13:38.074696       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0602 18:13:38.075128       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:13:38.075147       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:13:38.078813       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:13:38.078865       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:13:38.084264       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:13:38.084406       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:13:38.086799       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:13:38.086849       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0602 18:13:38.161725       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-cd7c84bfc-hqkxc"
	I0602 18:13:38.164955       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-xt4wh"
	E0602 18:14:33.141656       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0602 18:14:33.148987       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [46ce0d0cc477] <==
	* I0602 18:13:37.088700       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0602 18:13:37.088745       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0602 18:13:37.088792       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 18:13:37.180591       1 server_others.go:206] "Using iptables Proxier"
	I0602 18:13:37.180612       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 18:13:37.180616       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 18:13:37.180652       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 18:13:37.180991       1 server.go:656] "Version info" version="v1.23.6"
	I0602 18:13:37.181673       1 config.go:317] "Starting service config controller"
	I0602 18:13:37.181680       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 18:13:37.181829       1 config.go:226] "Starting endpoint slice config controller"
	I0602 18:13:37.181950       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 18:13:37.282107       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 18:13:37.282173       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [46a270fcaca3] <==
	* W0602 18:13:19.289692       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 18:13:19.289826       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 18:13:19.289997       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 18:13:19.290052       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 18:13:19.290002       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0602 18:13:19.290066       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 18:13:20.197528       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 18:13:20.197578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0602 18:13:20.199642       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 18:13:20.199741       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 18:13:20.223687       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0602 18:13:20.223803       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0602 18:13:20.232582       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0602 18:13:20.232617       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 18:13:20.234192       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 18:13:20.234224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 18:13:20.371321       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0602 18:13:20.371371       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0602 18:13:20.432041       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0602 18:13:20.432076       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0602 18:13:20.563538       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 18:13:20.563572       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 18:13:20.691881       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0602 18:13:21.368883       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0602 18:13:23.086818       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 18:08:16 UTC, end at Thu 2022-06-02 18:14:36 UTC. --
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.652120    7110 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.652176    7110 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.652248    7110 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.652534    7110 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.652615    7110 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.671837    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e3405b28-0afd-4a57-b9aa-4c12c8880eee-kube-proxy\") pod \"kube-proxy-xbj6w\" (UID: \"e3405b28-0afd-4a57-b9aa-4c12c8880eee\") " pod="kube-system/kube-proxy-xbj6w"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.671889    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4a581662-4b96-4aff-a293-48be5f24767e-tmp-volume\") pod \"dashboard-metrics-scraper-56974995fc-xt4wh\" (UID: \"4a581662-4b96-4aff-a293-48be5f24767e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-xt4wh"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.671908    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd5sm\" (UniqueName: \"kubernetes.io/projected/4a581662-4b96-4aff-a293-48be5f24767e-kube-api-access-cd5sm\") pod \"dashboard-metrics-scraper-56974995fc-xt4wh\" (UID: \"4a581662-4b96-4aff-a293-48be5f24767e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-xt4wh"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.671923    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrsgf\" (UniqueName: \"kubernetes.io/projected/2b45dde7-82b4-439a-b822-381b15db860e-kube-api-access-wrsgf\") pod \"kubernetes-dashboard-cd7c84bfc-hqkxc\" (UID: \"2b45dde7-82b4-439a-b822-381b15db860e\") " pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-hqkxc"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.671946    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e28d8ad9-0512-4720-8607-2033e71a4b2b-tmp-dir\") pod \"metrics-server-b955d9d8-mmzb2\" (UID: \"e28d8ad9-0512-4720-8607-2033e71a4b2b\") " pod="kube-system/metrics-server-b955d9d8-mmzb2"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.671963    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qllf\" (UniqueName: \"kubernetes.io/projected/2e9f42d9-06d2-44c5-ab59-2560b50fd5c5-kube-api-access-7qllf\") pod \"coredns-64897985d-qp56l\" (UID: \"2e9f42d9-06d2-44c5-ab59-2560b50fd5c5\") " pod="kube-system/coredns-64897985d-qp56l"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.671977    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49qkd\" (UniqueName: \"kubernetes.io/projected/e28d8ad9-0512-4720-8607-2033e71a4b2b-kube-api-access-49qkd\") pod \"metrics-server-b955d9d8-mmzb2\" (UID: \"e28d8ad9-0512-4720-8607-2033e71a4b2b\") " pod="kube-system/metrics-server-b955d9d8-mmzb2"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.671992    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/15b1bcd9-2251-4762-bbe4-61e3c8db0e3c-tmp\") pod \"storage-provisioner\" (UID: \"15b1bcd9-2251-4762-bbe4-61e3c8db0e3c\") " pod="kube-system/storage-provisioner"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.672009    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r64s7\" (UniqueName: \"kubernetes.io/projected/15b1bcd9-2251-4762-bbe4-61e3c8db0e3c-kube-api-access-r64s7\") pod \"storage-provisioner\" (UID: \"15b1bcd9-2251-4762-bbe4-61e3c8db0e3c\") " pod="kube-system/storage-provisioner"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.672023    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3405b28-0afd-4a57-b9aa-4c12c8880eee-xtables-lock\") pod \"kube-proxy-xbj6w\" (UID: \"e3405b28-0afd-4a57-b9aa-4c12c8880eee\") " pod="kube-system/kube-proxy-xbj6w"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.672039    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hkpv\" (UniqueName: \"kubernetes.io/projected/e3405b28-0afd-4a57-b9aa-4c12c8880eee-kube-api-access-7hkpv\") pod \"kube-proxy-xbj6w\" (UID: \"e3405b28-0afd-4a57-b9aa-4c12c8880eee\") " pod="kube-system/kube-proxy-xbj6w"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.672053    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3405b28-0afd-4a57-b9aa-4c12c8880eee-lib-modules\") pod \"kube-proxy-xbj6w\" (UID: \"e3405b28-0afd-4a57-b9aa-4c12c8880eee\") " pod="kube-system/kube-proxy-xbj6w"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.672066    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2b45dde7-82b4-439a-b822-381b15db860e-tmp-volume\") pod \"kubernetes-dashboard-cd7c84bfc-hqkxc\" (UID: \"2b45dde7-82b4-439a-b822-381b15db860e\") " pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-hqkxc"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.672079    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e9f42d9-06d2-44c5-ab59-2560b50fd5c5-config-volume\") pod \"coredns-64897985d-qp56l\" (UID: \"2e9f42d9-06d2-44c5-ab59-2560b50fd5c5\") " pod="kube-system/coredns-64897985d-qp56l"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.672088    7110 reconciler.go:157] "Reconciler: start to sync state"
	Jun 02 18:14:35 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:35.847702    7110 request.go:665] Waited for 1.152339589s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Jun 02 18:14:35 default-k8s-different-port-20220602110711-2113 kubelet[7110]: E0602 18:14:35.869275    7110 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220602110711-2113\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220602110711-2113"
	Jun 02 18:14:36 default-k8s-different-port-20220602110711-2113 kubelet[7110]: E0602 18:14:36.052627    7110 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220602110711-2113\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220602110711-2113"
	Jun 02 18:14:36 default-k8s-different-port-20220602110711-2113 kubelet[7110]: E0602 18:14:36.252529    7110 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220602110711-2113\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220602110711-2113"
	Jun 02 18:14:36 default-k8s-different-port-20220602110711-2113 kubelet[7110]: E0602 18:14:36.469264    7110 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220602110711-2113\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220602110711-2113"
	
	* 
	* ==> kubernetes-dashboard [b32592514baa] <==
	* 2022/06/02 18:13:44 Starting overwatch
	2022/06/02 18:13:44 Using namespace: kubernetes-dashboard
	2022/06/02 18:13:44 Using in-cluster config to connect to apiserver
	2022/06/02 18:13:44 Using secret token for csrf signing
	2022/06/02 18:13:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/02 18:13:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/02 18:13:44 Successful initial request to the apiserver, version: v1.23.6
	2022/06/02 18:13:44 Generating JWE encryption key
	2022/06/02 18:13:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/02 18:13:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/02 18:13:45 Initializing JWE encryption key from synchronized object
	2022/06/02 18:13:45 Creating in-cluster Sidecar client
	2022/06/02 18:13:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/02 18:13:45 Serving insecurely on HTTP port: 9090
	2022/06/02 18:14:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [10fca54a3cf3] <==
	* I0602 18:13:38.502207       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 18:13:38.510010       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 18:13:38.510057       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 18:13:38.514816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 18:13:38.515010       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220602110711-2113_79550121-4034-4072-a2ef-c0cb066261bf!
	I0602 18:13:38.515376       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a4013fb-89b6-4394-9436-841beb6e1d6b", APIVersion:"v1", ResourceVersion:"566", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220602110711-2113_79550121-4034-4072-a2ef-c0cb066261bf became leader
	I0602 18:13:38.615985       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220602110711-2113_79550121-4034-4072-a2ef-c0cb066261bf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220602110711-2113 -n default-k8s-different-port-20220602110711-2113
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220602110711-2113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-mmzb2
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220602110711-2113 describe pod metrics-server-b955d9d8-mmzb2
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220602110711-2113 describe pod metrics-server-b955d9d8-mmzb2: exit status 1 (333.903988ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-mmzb2" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220602110711-2113 describe pod metrics-server-b955d9d8-mmzb2: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220602110711-2113
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220602110711-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "15da6650a4b717cc84de1a8ff14b95f23c483fa6df765351eab5f9f831f1fbb5",
	        "Created": "2022-06-02T18:07:17.909147477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 222470,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T18:08:16.779614085Z",
	            "FinishedAt": "2022-06-02T18:08:14.847555704Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/15da6650a4b717cc84de1a8ff14b95f23c483fa6df765351eab5f9f831f1fbb5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/15da6650a4b717cc84de1a8ff14b95f23c483fa6df765351eab5f9f831f1fbb5/hostname",
	        "HostsPath": "/var/lib/docker/containers/15da6650a4b717cc84de1a8ff14b95f23c483fa6df765351eab5f9f831f1fbb5/hosts",
	        "LogPath": "/var/lib/docker/containers/15da6650a4b717cc84de1a8ff14b95f23c483fa6df765351eab5f9f831f1fbb5/15da6650a4b717cc84de1a8ff14b95f23c483fa6df765351eab5f9f831f1fbb5-json.log",
	        "Name": "/default-k8s-different-port-20220602110711-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220602110711-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220602110711-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/749671fe85bbaa8f0cf9f29b9ab3d51afdbf4dae17a1592400c7a586e732a945-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/749671fe85bbaa8f0cf9f29b9ab3d51afdbf4dae17a1592400c7a586e732a945/merged",
	                "UpperDir": "/var/lib/docker/overlay2/749671fe85bbaa8f0cf9f29b9ab3d51afdbf4dae17a1592400c7a586e732a945/diff",
	                "WorkDir": "/var/lib/docker/overlay2/749671fe85bbaa8f0cf9f29b9ab3d51afdbf4dae17a1592400c7a586e732a945/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220602110711-2113",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220602110711-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220602110711-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220602110711-2113",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220602110711-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb30e712988b82480f56a77f37ed83f25e19054ed8c00505f88cc31c5c7055e7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52979"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52980"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52981"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52982"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52983"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/eb30e712988b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220602110711-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "15da6650a4b7",
	                        "default-k8s-different-port-20220602110711-2113"
	                    ],
	                    "NetworkID": "fe40b6b9d189fb34bb611388ce54fac245dc51e55f85ea4b41021b7f6808cdc7",
	                    "EndpointID": "d3bda36e318da8f19ed8e633b041bed1c31187a0fb70a3e015d79abfbb51e6ab",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602110711-2113 -n default-k8s-different-port-20220602110711-2113
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220602110711-2113 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220602110711-2113 logs -n 25: (2.691531717s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                | disable-driver-mounts-20220602105918-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 10:59 PDT |
	|         | disable-driver-mounts-20220602105918-2113         |                                                |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:59 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:00 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220602105906-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:04 PDT | 02 Jun 22 11:04 PDT |
	|         | old-k8s-version-20220602105906-2113               |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220602105906-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:04 PDT | 02 Jun 22 11:04 PDT |
	|         | old-k8s-version-20220602105906-2113               |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:00 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:06 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | no-preload-20220602105919-2113                    | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:07 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | no-preload-20220602105919-2113                    | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | no-preload-20220602105919-2113                    |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220602105906-2113               | old-k8s-version-20220602105906-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:12 PDT | 02 Jun 22 11:13 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:13 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:13 PDT | 02 Jun 22 11:13 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:13 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220602110711-2113    | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 11:08:15
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 11:08:15.517716   14271 out.go:296] Setting OutFile to fd 1 ...
	I0602 11:08:15.517914   14271 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:08:15.517920   14271 out.go:309] Setting ErrFile to fd 2...
	I0602 11:08:15.517924   14271 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:08:15.518039   14271 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 11:08:15.518296   14271 out.go:303] Setting JSON to false
	I0602 11:08:15.533877   14271 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4064,"bootTime":1654189231,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 11:08:15.534006   14271 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 11:08:15.555791   14271 out.go:177] * [default-k8s-different-port-20220602110711-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 11:08:15.597880   14271 notify.go:193] Checking for updates...
	I0602 11:08:15.619617   14271 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 11:08:15.640808   14271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:08:15.661783   14271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 11:08:15.682595   14271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 11:08:15.703785   14271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 11:08:15.725094   14271 config.go:178] Loaded profile config "default-k8s-different-port-20220602110711-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:08:15.725430   14271 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 11:08:15.796906   14271 docker.go:137] docker version: linux-20.10.14
	I0602 11:08:15.797053   14271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:08:15.922561   14271 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:08:15.86746037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:08:15.966236   14271 out.go:177] * Using the docker driver based on existing profile
	I0602 11:08:15.988390   14271 start.go:284] selected driver: docker
	I0602 11:08:15.988424   14271 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220602110711-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220602110711-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:08:15.988564   14271 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 11:08:15.991998   14271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:08:16.114994   14271 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:08:16.062502247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:08:16.115182   14271 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 11:08:16.115205   14271 cni.go:95] Creating CNI manager for ""
	I0602 11:08:16.115214   14271 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:08:16.115223   14271 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220602110711-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220602110711-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:08:16.158958   14271 out.go:177] * Starting control plane node default-k8s-different-port-20220602110711-2113 in cluster default-k8s-different-port-20220602110711-2113
	I0602 11:08:16.181099   14271 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 11:08:16.202847   14271 out.go:177] * Pulling base image ...
	I0602 11:08:16.244857   14271 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:08:16.244892   14271 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 11:08:16.244926   14271 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 11:08:16.244951   14271 cache.go:57] Caching tarball of preloaded images
	I0602 11:08:16.245139   14271 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 11:08:16.245160   14271 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 11:08:16.246083   14271 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/config.json ...
	I0602 11:08:16.310676   14271 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 11:08:16.310691   14271 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 11:08:16.310699   14271 cache.go:206] Successfully downloaded all kic artifacts
	I0602 11:08:16.310742   14271 start.go:352] acquiring machines lock for default-k8s-different-port-20220602110711-2113: {Name:mk5c32f64296c6672223bdc5496081160863f257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 11:08:16.310822   14271 start.go:356] acquired machines lock for "default-k8s-different-port-20220602110711-2113" in 60.649µs
	I0602 11:08:16.310842   14271 start.go:94] Skipping create...Using existing machine configuration
	I0602 11:08:16.310853   14271 fix.go:55] fixHost starting: 
	I0602 11:08:16.311066   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:08:16.377507   14271 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220602110711-2113: state=Stopped err=<nil>
	W0602 11:08:16.377551   14271 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 11:08:16.399302   14271 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220602110711-2113" ...
	I0602 11:08:16.420479   14271 cli_runner.go:164] Run: docker start default-k8s-different-port-20220602110711-2113
	I0602 11:08:16.774466   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:08:16.847223   14271 kic.go:416] container "default-k8s-different-port-20220602110711-2113" state is running.
	I0602 11:08:16.847828   14271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220602110711-2113
	I0602 11:08:16.920874   14271 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/config.json ...
	I0602 11:08:16.921257   14271 machine.go:88] provisioning docker machine ...
	I0602 11:08:16.921280   14271 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220602110711-2113"
	I0602 11:08:16.921351   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:16.993938   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:16.994122   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:16.994150   14271 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220602110711-2113 && echo "default-k8s-different-port-20220602110711-2113" | sudo tee /etc/hostname
	I0602 11:08:17.119677   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220602110711-2113
	
	I0602 11:08:17.119769   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.193462   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:17.193625   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:17.193641   14271 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220602110711-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220602110711-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220602110711-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 11:08:17.313470   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:08:17.313494   14271 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 11:08:17.313514   14271 ubuntu.go:177] setting up certificates
	I0602 11:08:17.313526   14271 provision.go:83] configureAuth start
	I0602 11:08:17.313600   14271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.386535   14271 provision.go:138] copyHostCerts
	I0602 11:08:17.386632   14271 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 11:08:17.386642   14271 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 11:08:17.386747   14271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 11:08:17.386997   14271 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 11:08:17.387004   14271 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 11:08:17.387064   14271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 11:08:17.387225   14271 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 11:08:17.387231   14271 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 11:08:17.387292   14271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 11:08:17.387411   14271 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220602110711-2113 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220602110711-2113]
	I0602 11:08:17.434515   14271 provision.go:172] copyRemoteCerts
	I0602 11:08:17.434580   14271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 11:08:17.434625   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.506502   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:17.593925   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 11:08:17.614967   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0602 11:08:17.637005   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0602 11:08:17.658235   14271 provision.go:86] duration metric: configureAuth took 344.691133ms
	I0602 11:08:17.658249   14271 ubuntu.go:193] setting minikube options for container-runtime
	I0602 11:08:17.658395   14271 config.go:178] Loaded profile config "default-k8s-different-port-20220602110711-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:08:17.658448   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.730610   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:17.730757   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:17.730766   14271 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 11:08:17.850560   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 11:08:17.850583   14271 ubuntu.go:71] root file system type: overlay
	I0602 11:08:17.850750   14271 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 11:08:17.850832   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:17.922108   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:17.922253   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:17.922301   14271 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 11:08:18.046181   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 11:08:18.046271   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.117615   14271 main.go:134] libmachine: Using SSH client type: native
	I0602 11:08:18.117752   14271 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52979 <nil> <nil>}
	I0602 11:08:18.117764   14271 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 11:08:18.238940   14271 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:08:18.238960   14271 machine.go:91] provisioned docker machine in 1.317671465s
	I0602 11:08:18.238969   14271 start.go:306] post-start starting for "default-k8s-different-port-20220602110711-2113" (driver="docker")
	I0602 11:08:18.238974   14271 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 11:08:18.239040   14271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 11:08:18.239086   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.309021   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:18.395195   14271 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 11:08:18.398736   14271 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 11:08:18.398753   14271 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 11:08:18.398761   14271 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 11:08:18.398769   14271 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 11:08:18.398779   14271 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 11:08:18.398885   14271 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 11:08:18.399033   14271 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 11:08:18.399193   14271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 11:08:18.406089   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:08:18.423802   14271 start.go:309] post-start completed in 184.82013ms
	I0602 11:08:18.423883   14271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 11:08:18.423931   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.493419   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:18.577352   14271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 11:08:18.582028   14271 fix.go:57] fixHost completed within 2.271136565s
	I0602 11:08:18.582039   14271 start.go:81] releasing machines lock for "default-k8s-different-port-20220602110711-2113", held for 2.271170149s
	I0602 11:08:18.582108   14271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.652251   14271 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 11:08:18.652251   14271 ssh_runner.go:195] Run: systemctl --version
	I0602 11:08:18.652335   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.652339   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:18.729373   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:18.731038   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:08:18.813622   14271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 11:08:18.943560   14271 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:08:18.954030   14271 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 11:08:18.954084   14271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 11:08:18.963406   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 11:08:18.976091   14271 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 11:08:19.040894   14271 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 11:08:19.108714   14271 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:08:19.118700   14271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 11:08:19.185811   14271 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 11:08:19.195192   14271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:08:19.228635   14271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:08:15.221956   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:15.271807   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.271819   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:15.271873   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:15.303439   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.303452   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:15.303518   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:15.333961   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.333988   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:15.334084   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:15.364875   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.364888   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:15.364950   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:15.395700   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.395712   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:15.395765   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:15.424510   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.424520   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:15.424572   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:15.453415   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.453428   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:15.453493   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:15.483708   13778 logs.go:274] 0 containers: []
	W0602 11:08:15.483719   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:15.483724   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:15.483730   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:15.538743   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:15.538752   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:15.538758   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:15.550783   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:15.550794   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:17.605845   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055003078s)
	I0602 11:08:17.605979   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:17.605988   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:17.649331   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:17.649353   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:20.164014   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:19.305934   14271 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 11:08:19.306113   14271 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220602110711-2113 dig +short host.docker.internal
	I0602 11:08:19.446242   14271 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 11:08:19.446326   14271 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 11:08:19.450862   14271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:08:19.460634   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:19.531276   14271 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:08:19.531337   14271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:08:19.561235   14271 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0602 11:08:19.561251   14271 docker.go:541] Images already preloaded, skipping extraction
	I0602 11:08:19.561312   14271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:08:19.591189   14271 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0602 11:08:19.591211   14271 cache_images.go:84] Images are preloaded, skipping loading
	I0602 11:08:19.591282   14271 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 11:08:19.665013   14271 cni.go:95] Creating CNI manager for ""
	I0602 11:08:19.665024   14271 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:08:19.665044   14271 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 11:08:19.665056   14271 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220602110711-2113 NodeName:default-k8s-different-port-20220602110711-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 11:08:19.665176   14271 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "default-k8s-different-port-20220602110711-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 11:08:19.665248   14271 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=default-k8s-different-port-20220602110711-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220602110711-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0602 11:08:19.665304   14271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 11:08:19.673262   14271 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 11:08:19.673322   14271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 11:08:19.680190   14271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0602 11:08:19.692477   14271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 11:08:19.704606   14271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I0602 11:08:19.717011   14271 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 11:08:19.720737   14271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:08:19.730066   14271 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113 for IP: 192.168.58.2
	I0602 11:08:19.730171   14271 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 11:08:19.730221   14271 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 11:08:19.730312   14271 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.key
	I0602 11:08:19.730378   14271 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/apiserver.key.cee25041
	I0602 11:08:19.730457   14271 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/proxy-client.key
	I0602 11:08:19.730674   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 11:08:19.730711   14271 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 11:08:19.730724   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 11:08:19.730754   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 11:08:19.730789   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 11:08:19.730822   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 11:08:19.730884   14271 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:08:19.731420   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 11:08:19.748043   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0602 11:08:19.764498   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 11:08:19.781157   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 11:08:19.797871   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 11:08:19.814159   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 11:08:19.830887   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 11:08:19.848080   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 11:08:19.865456   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 11:08:19.881698   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 11:08:19.898483   14271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 11:08:19.914958   14271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 11:08:19.927686   14271 ssh_runner.go:195] Run: openssl version
	I0602 11:08:19.932835   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 11:08:19.940543   14271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:08:19.944572   14271 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:08:19.944611   14271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:08:19.949643   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 11:08:19.956574   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 11:08:19.964137   14271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 11:08:19.967898   14271 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 11:08:19.967937   14271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 11:08:19.973115   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
	I0602 11:08:19.980514   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 11:08:19.988285   14271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 11:08:19.991947   14271 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 11:08:19.991984   14271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 11:08:19.997046   14271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
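
Editor's note on the hash-and-symlink step logged above: `openssl x509 -hash -noout -in <cert>` prints the subject-name hash that the system trust store uses as a lookup key, and the `ln -fs ... /etc/ssl/certs/<hash>.0` commands install each CA under that name. The Go sketch below reproduces those two steps only; it is illustrative (function names and the hard-coded paths are assumptions, not minikube code).

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash mirrors the two logged steps: ask openssl for the cert's
// subject-name hash, then symlink the cert as <hash>.0 in the trust dir.
func linkCAByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	// Paths copied from the log above; running this for real requires root.
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
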
	I0602 11:08:20.004017   14271 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220602110711-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220602110711-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:08:20.004132   14271 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:08:20.033806   14271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 11:08:20.041165   14271 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 11:08:20.041189   14271 kubeadm.go:626] restartCluster start
	I0602 11:08:20.041238   14271 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 11:08:20.047947   14271 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.047999   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:08:20.119320   14271 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220602110711-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:08:20.119501   14271 kubeconfig.go:127] "default-k8s-different-port-20220602110711-2113" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig - will repair!
	I0602 11:08:20.119891   14271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:08:20.121169   14271 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 11:08:20.128818   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.128866   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.140758   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.341344   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.341425   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.350851   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.221322   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:20.272710   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.272723   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:20.272780   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:20.303113   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.303125   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:20.303179   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:20.332713   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.332726   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:20.332786   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:20.363526   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.363541   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:20.363604   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:20.393277   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.393290   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:20.393345   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:20.423123   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.423136   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:20.423189   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:20.452818   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.452831   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:20.452894   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:20.482672   13778 logs.go:274] 0 containers: []
	W0602 11:08:20.482685   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:20.482691   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:20.482699   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:20.537779   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:20.537790   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:20.537797   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:20.551744   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:20.551756   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:22.603781   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051975725s)
	I0602 11:08:22.603889   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:22.603895   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:22.641201   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:22.641214   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:25.154798   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:20.540903   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.541022   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.549461   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.740967   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.741117   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.752173   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:20.940840   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:20.940902   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:20.949819   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.142949   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.143091   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.153503   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.341193   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.341297   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.352208   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.542948   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.543068   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.553688   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.742445   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.742610   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.752897   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:21.941532   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:21.941622   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:21.952125   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.143019   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.143112   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.154053   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.342959   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.343122   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.354067   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.541852   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.541959   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.552227   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.743005   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.743174   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.753673   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:22.941169   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:22.941282   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:22.951571   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.143019   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:23.143121   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:23.154033   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.154043   14271 api_server.go:165] Checking apiserver status ...
	I0602 11:08:23.154095   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:08:23.162400   14271 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.162410   14271 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0602 11:08:23.162418   14271 kubeadm.go:1092] stopping kube-system containers ...
	I0602 11:08:23.162473   14271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:08:23.192549   14271 docker.go:442] Stopping containers: [5424fc41e82a 5f5b0dd7b333 f35280654931 b9a9032aa6a0 5f2b057e31f6 0a04721ed918 e3c1dd0cd3c0 d432e94b8645 553b06952827 41e494ce31b3 947af7b50e63 059f7d232752 d3a03a2fc0b9 bf8a809c5a96 cff10caa9374 680bea8fcf84]
	I0602 11:08:23.192630   14271 ssh_runner.go:195] Run: docker stop 5424fc41e82a 5f5b0dd7b333 f35280654931 b9a9032aa6a0 5f2b057e31f6 0a04721ed918 e3c1dd0cd3c0 d432e94b8645 553b06952827 41e494ce31b3 947af7b50e63 059f7d232752 d3a03a2fc0b9 bf8a809c5a96 cff10caa9374 680bea8fcf84
	I0602 11:08:23.222876   14271 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 11:08:23.233125   14271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:08:23.240768   14271 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  2 18:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  2 18:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Jun  2 18:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  2 18:07 /etc/kubernetes/scheduler.conf
	
	I0602 11:08:23.240824   14271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0602 11:08:23.248274   14271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0602 11:08:23.255564   14271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0602 11:08:23.263617   14271 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.263680   14271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0602 11:08:23.270956   14271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0602 11:08:23.278150   14271 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:08:23.278193   14271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0602 11:08:23.284827   14271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:08:23.292008   14271 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 11:08:23.292025   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:23.336140   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:24.189152   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:24.321146   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:24.367977   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
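
Editor's note: the five commands above re-run individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same /var/tmp/minikube/kubeadm.yaml, each through bash with the versioned binaries directory prefixed to PATH. A hedged Go sketch of driving that sequence the same way follows; the helper is illustrative, not minikube's restartCluster code.

package main

import (
	"fmt"
	"os/exec"
)

// runPhases replays the logged sequence of `kubeadm init phase` commands
// through bash, mirroring the PATH prefix and config file from the log.
func runPhases() error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmdline := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
		if out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	if err := runPhases(); err != nil {
		fmt.Println(err)
	}
}
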
	I0602 11:08:24.415440   14271 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:08:24.415503   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:24.926339   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:25.424317   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:25.476098   14271 api_server.go:71] duration metric: took 1.060645549s to wait for apiserver process to appear ...
	I0602 11:08:25.476124   14271 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:08:25.476138   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:25.477296   14271 api_server.go:256] stopped: https://127.0.0.1:52983/healthz: Get "https://127.0.0.1:52983/healthz": EOF
	I0602 11:08:25.221414   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:25.296178   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.296191   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:25.296260   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:25.329053   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.329071   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:25.329164   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:25.357741   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.357752   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:25.357810   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:25.390667   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.390682   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:25.390741   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:25.437576   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.437588   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:25.437644   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:25.466359   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.466375   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:25.466456   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:25.502948   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.502962   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:25.503019   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:25.538129   13778 logs.go:274] 0 containers: []
	W0602 11:08:25.538146   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:25.538154   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:25.538162   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:25.582011   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:25.582029   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:25.595600   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:25.595615   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:25.652328   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:25.652345   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:25.652351   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:25.665370   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:25.665381   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:27.726129   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060700298s)
	I0602 11:08:25.977412   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:27.865104   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0602 11:08:27.865120   14271 api_server.go:102] status: https://127.0.0.1:52983/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0602 11:08:27.978216   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:27.984906   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:08:27.984929   14271 api_server.go:102] status: https://127.0.0.1:52983/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:08:28.477488   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:28.484388   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:08:28.484405   14271 api_server.go:102] status: https://127.0.0.1:52983/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:08:28.977988   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:28.983267   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:08:28.983291   14271 api_server.go:102] status: https://127.0.0.1:52983/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:08:29.478044   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:08:29.483906   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 200:
	ok
	I0602 11:08:29.490553   14271 api_server.go:140] control plane version: v1.23.6
	I0602 11:08:29.490564   14271 api_server.go:130] duration metric: took 4.014365072s to wait for apiserver health ...
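
Editor's note: the preceding lines show the healthz wait moving from EOF, to 403 for the anonymous user, to 500 while post-start hooks finish, and finally to 200. Below is a minimal, hedged Go sketch of such a poll loop; the URL and timeout are illustrative and this is not minikube's api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the deadline passes. TLS verification is skipped because the apiserver
// presents a self-signed certificate during bring-up.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:52983/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
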
	I0602 11:08:29.490572   14271 cni.go:95] Creating CNI manager for ""
	I0602 11:08:29.490579   14271 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:08:29.490591   14271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:08:29.498298   14271 system_pods.go:59] 8 kube-system pods found
	I0602 11:08:29.498313   14271 system_pods.go:61] "coredns-64897985d-h47dc" [7accc8c2-babb-4fb2-a915-34bdcaf81942] Running
	I0602 11:08:29.498323   14271 system_pods.go:61] "etcd-default-k8s-different-port-20220602110711-2113" [9a73a84a-8a22-4366-a66d-df315295a7a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 11:08:29.498328   14271 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220602110711-2113" [c11ca282-ae9e-4bb4-9517-d6c8bd9deab8] Running
	I0602 11:08:29.498333   14271 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220602110711-2113" [f8bd0bd0-acca-48d9-8f9f-33abf2cb6de2] Running
	I0602 11:08:29.498337   14271 system_pods.go:61] "kube-proxy-jrk2q" [7fa38b28-1f8b-4ef3-9983-3724a52b8b00] Running
	I0602 11:08:29.498341   14271 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220602110711-2113" [5fa1cd09-e48e-465c-8a2c-fc11ab91bb5d] Running
	I0602 11:08:29.498348   14271 system_pods.go:61] "metrics-server-b955d9d8-lnk7h" [a26e7c1f-21ad-400e-9ea2-7d626d72922d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:08:29.498356   14271 system_pods.go:61] "storage-provisioner" [1e7818f7-f246-4230-bd2a-1013266312d3] Running
	I0602 11:08:29.498361   14271 system_pods.go:74] duration metric: took 7.764866ms to wait for pod list to return data ...
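
Editor's note: the "8 kube-system pods found" listing above corresponds to a List call against the kube-system namespace followed by a per-pod Ready-condition check. A hedged client-go sketch of that check follows; the default kubeconfig path is an assumption, not the harness's kubeconfig.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		ready := false
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s: phase=%s ready=%v\n", pod.Name, pod.Status.Phase, ready)
	}
}
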
	I0602 11:08:29.498367   14271 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:08:29.501391   14271 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:08:29.501404   14271 node_conditions.go:123] node cpu capacity is 6
	I0602 11:08:29.501415   14271 node_conditions.go:105] duration metric: took 3.043692ms to run NodePressure ...
	I0602 11:08:29.501426   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:08:29.615914   14271 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0602 11:08:29.619660   14271 kubeadm.go:777] kubelet initialised
	I0602 11:08:29.619670   14271 kubeadm.go:778] duration metric: took 3.743155ms waiting for restarted kubelet to initialise ...
	I0602 11:08:29.619678   14271 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:08:29.624145   14271 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-h47dc" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:29.628299   14271 pod_ready.go:92] pod "coredns-64897985d-h47dc" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:29.628307   14271 pod_ready.go:81] duration metric: took 4.151112ms waiting for pod "coredns-64897985d-h47dc" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:29.628314   14271 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:30.226574   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:30.721539   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:30.759508   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.759521   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:30.759579   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:30.792623   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.792637   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:30.792712   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:30.822014   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.822028   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:30.822086   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:30.851154   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.851168   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:30.851240   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:30.880918   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.880931   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:30.880986   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:30.910502   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.910515   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:30.910577   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:30.941645   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.941657   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:30.941714   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:30.972909   13778 logs.go:274] 0 containers: []
	W0602 11:08:30.972921   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:30.972928   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:30.972934   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:30.984875   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:30.984888   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:31.040921   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:31.040935   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:31.040942   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:31.053333   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:31.053346   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:33.107850   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05445655s)
	I0602 11:08:33.107952   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:33.107959   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:31.641210   14271 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:33.641265   14271 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:35.650135   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:35.721787   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:35.751661   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.751673   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:35.751730   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:35.780322   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.780334   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:35.780393   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:35.809983   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.809996   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:35.810052   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:35.838069   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.838081   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:35.838140   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:35.866612   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.866629   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:35.866713   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:35.897341   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.897354   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:35.897409   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:35.928444   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.928456   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:35.928513   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:35.956497   13778 logs.go:274] 0 containers: []
	W0602 11:08:35.956510   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:35.956517   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:35.956524   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:35.969093   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:35.969108   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:38.024274   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055118179s)
	I0602 11:08:38.024385   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:38.024393   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:38.064021   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:38.064037   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:38.075931   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:38.075944   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:38.130990   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:35.642462   14271 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:36.642462   14271 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:36.642475   14271 pod_ready.go:81] duration metric: took 7.014033821s waiting for pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:36.642481   14271 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:38.655878   14271 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:40.632494   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:40.722073   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:40.750220   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.750232   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:40.750297   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:40.778245   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.778256   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:40.778304   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:40.807262   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.807273   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:40.807333   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:40.836172   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.836183   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:40.836239   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:40.864838   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.864850   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:40.864906   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:40.893840   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.893852   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:40.893910   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:40.923704   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.923715   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:40.923773   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:40.951957   13778 logs.go:274] 0 containers: []
	W0602 11:08:40.951970   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:40.951978   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:40.951986   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:41.004848   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:41.004859   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:41.004865   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:41.017334   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:41.017346   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:43.066770   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0493766s)
	I0602 11:08:43.066886   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:43.066894   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:43.107798   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:43.107814   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:41.154674   14271 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:43.156222   14271 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:43.156234   14271 pod_ready.go:81] duration metric: took 6.513634404s waiting for pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:43.156241   14271 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.668817   14271 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:44.668829   14271 pod_ready.go:81] duration metric: took 1.512556931s waiting for pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.668835   14271 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jrk2q" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.673173   14271 pod_ready.go:92] pod "kube-proxy-jrk2q" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:44.673180   14271 pod_ready.go:81] duration metric: took 4.340525ms waiting for pod "kube-proxy-jrk2q" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.673186   14271 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.677163   14271 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:08:44.677170   14271 pod_ready.go:81] duration metric: took 3.980246ms waiting for pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:44.677176   14271 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace to be "Ready" ...
	I0602 11:08:45.621045   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:45.722513   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:45.753852   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.753863   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:45.753920   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:45.782032   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.782044   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:45.782103   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:45.811660   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.811672   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:45.811730   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:45.841102   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.841115   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:45.841176   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:45.869555   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.869568   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:45.869625   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:45.896999   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.897011   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:45.897079   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:45.925033   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.925045   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:45.925100   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:45.955532   13778 logs.go:274] 0 containers: []
	W0602 11:08:45.955543   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:45.955550   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:45.955556   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:45.994815   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:45.994828   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:46.006706   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:46.006718   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:46.059309   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:46.059318   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:46.059325   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:46.071706   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:46.071719   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:48.125554   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053788045s)
	I0602 11:08:46.690067   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:49.192051   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:50.627972   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:50.722301   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:50.752680   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.752693   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:50.752749   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:50.781019   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.781032   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:50.781090   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:50.810077   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.810088   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:50.810152   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:50.839097   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.839108   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:50.839164   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:50.870493   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.870504   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:50.870560   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:50.899156   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.899168   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:50.899224   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:50.927401   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.927413   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:50.927469   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:50.970889   13778 logs.go:274] 0 containers: []
	W0602 11:08:50.970901   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:50.970908   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:50.970915   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:51.026070   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:51.026080   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:51.026086   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:51.037940   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:51.037952   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:53.091015   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053015843s)
	I0602 11:08:53.091123   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:53.091130   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:53.130767   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:53.130781   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:51.688335   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:53.689175   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:55.642775   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:08:55.722143   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:08:55.752596   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.752608   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:08:55.752663   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:08:55.781383   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.781395   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:08:55.781453   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:08:55.810740   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.810751   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:08:55.810806   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:08:55.839025   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.839037   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:08:55.839092   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:08:55.868111   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.868123   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:08:55.868185   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:08:55.896365   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.896376   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:08:55.896436   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:08:55.925240   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.925252   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:08:55.925308   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:08:55.954351   13778 logs.go:274] 0 containers: []
	W0602 11:08:55.954362   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:08:55.954370   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:08:55.954377   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:08:55.994349   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:08:55.994360   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:08:56.006541   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:08:56.006553   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:08:56.060230   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0602 11:08:56.060240   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:08:56.060246   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:08:56.072372   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:08:56.072385   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:08:58.126471   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054039162s)
	I0602 11:08:56.187836   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:08:58.190416   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:00.626897   13778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:09:00.636995   13778 kubeadm.go:630] restartCluster took 4m5.698955011s
	W0602 11:09:00.637074   13778 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0602 11:09:00.637089   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:09:01.056935   13778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:09:01.066336   13778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:09:01.073784   13778 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:09:01.073830   13778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:09:01.081072   13778 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:09:01.081099   13778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:09:01.817978   13778 out.go:204]   - Generating certificates and keys ...
	I0602 11:09:02.504280   13778 out.go:204]   - Booting up control plane ...
	I0602 11:09:00.687408   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:02.689765   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:04.689850   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:07.189249   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:09.190335   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:11.691237   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:14.187781   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:16.190080   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:18.687798   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:20.690432   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:23.187958   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:25.190427   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:27.687964   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:29.691339   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:32.188132   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:34.189396   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:36.189672   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:38.689846   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:41.188841   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:43.189653   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:45.190339   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:47.690415   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:50.188091   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:52.191824   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:54.690834   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:56.691875   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:09:59.189437   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:01.190943   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:03.191954   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:05.692452   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:07.692576   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:10.189968   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:12.690983   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:15.188184   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:17.189909   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:19.688905   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:21.691564   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:24.190443   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:26.690498   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:28.691268   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:31.190793   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:33.191155   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:35.690951   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:37.692551   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:40.193163   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:42.691386   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:44.692387   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:46.692685   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:49.193533   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:51.691604   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:53.693237   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	W0602 11:10:57.423207   13778 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0602 11:10:57.423236   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:10:57.840204   13778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:10:57.849925   13778 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:10:57.849972   13778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:10:57.857794   13778 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:10:57.857811   13778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:10:58.606461   13778 out.go:204]   - Generating certificates and keys ...
	I0602 11:10:59.124567   13778 out.go:204]   - Booting up control plane ...
	I0602 11:10:56.192552   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:10:58.689473   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:00.693155   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:03.193549   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:05.194270   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:07.693653   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:10.192674   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:12.691715   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:14.691808   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:17.191371   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:19.193132   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:21.193202   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:23.691940   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:25.692807   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:27.692954   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:30.191988   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:32.194025   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:34.692688   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:36.692994   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:38.693797   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:41.193247   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:43.693628   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:45.694558   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:48.191576   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:50.193727   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:52.194036   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:54.194247   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:56.694218   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:11:59.193493   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:01.194007   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:03.194607   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:05.693468   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:07.693608   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:09.695228   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:12.194703   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:14.693976   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:17.192125   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:19.194163   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:21.194395   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:23.693999   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:26.191617   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:28.194216   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:30.694582   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:33.193720   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:35.694487   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:38.194086   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:40.693116   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:42.693433   14271 pod_ready.go:102] pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace has status "Ready":"False"
	I0602 11:12:44.686833   14271 pod_ready.go:81] duration metric: took 4m0.005479685s waiting for pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace to be "Ready" ...
	E0602 11:12:44.686847   14271 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-lnk7h" in "kube-system" namespace to be "Ready" (will not retry!)
	I0602 11:12:44.686859   14271 pod_ready.go:38] duration metric: took 4m15.062761979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:12:44.686881   14271 kubeadm.go:630] restartCluster took 4m24.641108189s
	W0602 11:12:44.686956   14271 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0602 11:12:44.686973   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:12:54.041678   13778 kubeadm.go:397] StartCluster complete in 7m59.136004493s
	I0602 11:12:54.041759   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0602 11:12:54.071372   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.071384   13778 logs.go:276] No container was found matching "kube-apiserver"
	I0602 11:12:54.071441   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0602 11:12:54.100053   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.100066   13778 logs.go:276] No container was found matching "etcd"
	I0602 11:12:54.100125   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0602 11:12:54.128275   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.128286   13778 logs.go:276] No container was found matching "coredns"
	I0602 11:12:54.128343   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0602 11:12:54.157653   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.157665   13778 logs.go:276] No container was found matching "kube-scheduler"
	I0602 11:12:54.157722   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0602 11:12:54.187430   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.187443   13778 logs.go:276] No container was found matching "kube-proxy"
	I0602 11:12:54.187496   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0602 11:12:54.215461   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.215472   13778 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0602 11:12:54.215526   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0602 11:12:54.244945   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.244956   13778 logs.go:276] No container was found matching "storage-provisioner"
	I0602 11:12:54.245011   13778 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0602 11:12:54.274697   13778 logs.go:274] 0 containers: []
	W0602 11:12:54.274709   13778 logs.go:276] No container was found matching "kube-controller-manager"
	I0602 11:12:54.274716   13778 logs.go:123] Gathering logs for Docker ...
	I0602 11:12:54.274725   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0602 11:12:54.287581   13778 logs.go:123] Gathering logs for container status ...
	I0602 11:12:54.287595   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0602 11:12:56.340056   13778 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052413965s)
	I0602 11:12:56.340164   13778 logs.go:123] Gathering logs for kubelet ...
	I0602 11:12:56.340171   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0602 11:12:56.380800   13778 logs.go:123] Gathering logs for dmesg ...
	I0602 11:12:56.380813   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0602 11:12:56.392375   13778 logs.go:123] Gathering logs for describe nodes ...
	I0602 11:12:56.392386   13778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0602 11:12:56.445060   13778 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0602 11:12:56.445088   13778 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0602 11:12:56.445103   13778 out.go:239] * 
	W0602 11:12:56.445207   13778 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 11:12:56.445222   13778 out.go:239] * 
	W0602 11:12:56.445819   13778 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 11:12:56.530257   13778 out.go:177] 
	W0602 11:12:56.572600   13778 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0602 11:12:56.572701   13778 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0602 11:12:56.572743   13778 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0602 11:12:56.593452   13778 out.go:177] 
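	
	[Editor's note] The two warnings above reduce to a single retry: restart the profile with the kubelet cgroup driver that minikube itself suggests. A minimal sketch, assuming the docker driver used throughout this run; <profile> is a placeholder for whichever profile produced the failure (the flag is quoted verbatim from the suggestion above):
	
	  # Retry the failed start with the kubelet cgroup driver pinned to systemd,
	  # as suggested by the warning above (see kubernetes/minikube#4172).
	  out/minikube-darwin-amd64 start -p <profile> --driver=docker \
	    --extra-config=kubelet.cgroup-driver=systemd
	
	  # If it still fails, inspect the kubelet journal on the node, e.g.:
	  out/minikube-darwin-amd64 ssh -p <profile> "sudo journalctl -xeu kubelet"
	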
	I0602 11:13:14.184065   14271 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (29.496567408s)
	I0602 11:13:14.184130   14271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:13:14.194290   14271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:13:14.202145   14271 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:13:14.202192   14271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:13:14.210100   14271 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:13:14.210122   14271 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:13:14.712305   14271 out.go:204]   - Generating certificates and keys ...
	I0602 11:13:15.565732   14271 out.go:204]   - Booting up control plane ...
	I0602 11:13:22.111263   14271 out.go:204]   - Configuring RBAC rules ...
	I0602 11:13:22.490451   14271 cni.go:95] Creating CNI manager for ""
	I0602 11:13:22.490462   14271 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:13:22.490476   14271 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 11:13:22.490574   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=default-k8s-different-port-20220602110711-2113 minikube.k8s.io/updated_at=2022_06_02T11_13_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:22.490580   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:22.596110   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:22.677315   14271 ops.go:34] apiserver oom_adj: -16
	I0602 11:13:23.220603   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:23.720089   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:24.218643   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:24.718821   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:25.218844   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:25.718927   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:26.220735   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:26.718665   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:27.220534   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:27.719096   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:28.219369   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:28.718683   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:29.218768   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:29.718884   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:30.218745   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:30.719801   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:31.220266   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:31.718699   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:32.220130   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:32.719009   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:33.218958   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:33.720809   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:34.220786   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:34.718815   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:35.218757   14271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:13:35.274837   14271 kubeadm.go:1045] duration metric: took 12.78411621s to wait for elevateKubeSystemPrivileges.
	I0602 11:13:35.274851   14271 kubeadm.go:397] StartCluster complete in 5m15.265389598s
	I0602 11:13:35.274869   14271 settings.go:142] acquiring lock: {Name:mka48fc2cc9e132f8df9370d54d7f09abdd5d2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:13:35.274953   14271 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:13:35.275477   14271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:13:35.790361   14271 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220602110711-2113" rescaled to 1
	I0602 11:13:35.790398   14271 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 11:13:35.829728   14271 out.go:177] * Verifying Kubernetes components...
	I0602 11:13:35.790423   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 11:13:35.790448   14271 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0602 11:13:35.790558   14271 config.go:178] Loaded profile config "default-k8s-different-port-20220602110711-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:13:35.888819   14271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:13:35.888817   14271 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:35.888843   14271 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:35.888865   14271 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220602110711-2113"
	W0602 11:13:35.888876   14271 addons.go:165] addon storage-provisioner should already be in state true
	I0602 11:13:35.888867   14271 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:35.888869   14271 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:35.888875   14271 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:35.888904   14271 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:35.888920   14271 host.go:66] Checking if "default-k8s-different-port-20220602110711-2113" exists ...
	W0602 11:13:35.888925   14271 addons.go:165] addon dashboard should already be in state true
	I0602 11:13:35.888947   14271 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220602110711-2113"
	W0602 11:13:35.888967   14271 addons.go:165] addon metrics-server should already be in state true
	I0602 11:13:35.888978   14271 host.go:66] Checking if "default-k8s-different-port-20220602110711-2113" exists ...
	I0602 11:13:35.889061   14271 host.go:66] Checking if "default-k8s-different-port-20220602110711-2113" exists ...
	I0602 11:13:35.889232   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:13:35.889377   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:13:35.890065   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:13:35.892758   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:13:35.987270   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:13:35.987269   14271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 11:13:36.115814   14271 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0602 11:13:36.005658   14271 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:36.042737   14271 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 11:13:36.078510   14271 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	W0602 11:13:36.115876   14271 addons.go:165] addon default-storageclass should already be in state true
	I0602 11:13:36.153934   14271 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0602 11:13:36.174686   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0602 11:13:36.174720   14271 host.go:66] Checking if "default-k8s-different-port-20220602110711-2113" exists ...
	I0602 11:13:36.174772   14271 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:13:36.211865   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 11:13:36.211944   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:13:36.211966   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:13:36.214784   14271 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602110711-2113 --format={{.State.Status}}
	I0602 11:13:36.248803   14271 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0602 11:13:36.268871   14271 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220602110711-2113" to be "Ready" ...
	I0602 11:13:36.285816   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0602 11:13:36.285833   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0602 11:13:36.285936   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:13:36.293989   14271 node_ready.go:49] node "default-k8s-different-port-20220602110711-2113" has status "Ready":"True"
	I0602 11:13:36.294010   14271 node_ready.go:38] duration metric: took 8.325769ms waiting for node "default-k8s-different-port-20220602110711-2113" to be "Ready" ...
	I0602 11:13:36.294020   14271 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:13:36.303564   14271 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-q7f6l" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:36.316138   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:13:36.319822   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:13:36.343187   14271 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 11:13:36.343200   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 11:13:36.343266   14271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602110711-2113
	I0602 11:13:36.386409   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:13:36.428996   14271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/default-k8s-different-port-20220602110711-2113/id_rsa Username:docker}
	I0602 11:13:36.487887   14271 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0602 11:13:36.487899   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0602 11:13:36.565029   14271 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0602 11:13:36.565043   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0602 11:13:36.566040   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0602 11:13:36.566052   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0602 11:13:36.568172   14271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:13:36.582892   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0602 11:13:36.582907   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0602 11:13:36.584953   14271 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:13:36.584970   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0602 11:13:36.667739   14271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:13:36.677808   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0602 11:13:36.677830   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0602 11:13:36.683133   14271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 11:13:36.769655   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0602 11:13:36.769670   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0602 11:13:36.856979   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0602 11:13:36.856995   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0602 11:13:36.886658   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0602 11:13:36.886671   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0602 11:13:37.062085   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0602 11:13:37.062114   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0602 11:13:37.163633   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0602 11:13:37.163656   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0602 11:13:37.188519   14271 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.201194084s)
	I0602 11:13:37.188539   14271 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0602 11:13:37.255684   14271 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:13:37.255698   14271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0602 11:13:37.292360   14271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:13:37.552623   14271 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220602110711-2113"
	I0602 11:13:38.220990   14271 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0602 11:13:38.279795   14271 addons.go:417] enableAddons completed in 2.489282965s
	I0602 11:13:38.320701   14271 pod_ready.go:102] pod "coredns-64897985d-q7f6l" in "kube-system" namespace has status "Ready":"False"
	I0602 11:13:39.321022   14271 pod_ready.go:92] pod "coredns-64897985d-q7f6l" in "kube-system" namespace has status "Ready":"True"
	I0602 11:13:39.321036   14271 pod_ready.go:81] duration metric: took 3.017402616s waiting for pod "coredns-64897985d-q7f6l" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.321043   14271 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-qp56l" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.350329   14271 pod_ready.go:92] pod "coredns-64897985d-qp56l" in "kube-system" namespace has status "Ready":"True"
	I0602 11:13:39.350338   14271 pod_ready.go:81] duration metric: took 29.290028ms waiting for pod "coredns-64897985d-qp56l" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.350344   14271 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.355598   14271 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:13:39.355608   14271 pod_ready.go:81] duration metric: took 5.258951ms waiting for pod "etcd-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.355614   14271 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.360359   14271 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:13:39.360370   14271 pod_ready.go:81] duration metric: took 4.750812ms waiting for pod "kube-apiserver-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.360384   14271 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.365311   14271 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:13:39.365325   14271 pod_ready.go:81] duration metric: took 4.926738ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:39.365337   14271 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xbj6w" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:40.724473   14271 pod_ready.go:92] pod "kube-proxy-xbj6w" in "kube-system" namespace has status "Ready":"True"
	I0602 11:13:40.724488   14271 pod_ready.go:81] duration metric: took 1.359119427s waiting for pod "kube-proxy-xbj6w" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:40.724496   14271 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:40.919321   14271 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:13:40.919331   14271 pod_ready.go:81] duration metric: took 194.825557ms waiting for pod "kube-scheduler-default-k8s-different-port-20220602110711-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:13:40.919336   14271 pod_ready.go:38] duration metric: took 4.62522339s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:13:40.919356   14271 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:13:40.919409   14271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:13:40.931035   14271 api_server.go:71] duration metric: took 5.140531013s to wait for apiserver process to appear ...
	I0602 11:13:40.931050   14271 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:13:40.931057   14271 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52983/healthz ...
	I0602 11:13:40.936209   14271 api_server.go:266] https://127.0.0.1:52983/healthz returned 200:
	ok
	I0602 11:13:40.937293   14271 api_server.go:140] control plane version: v1.23.6
	I0602 11:13:40.937301   14271 api_server.go:130] duration metric: took 6.246771ms to wait for apiserver health ...
	I0602 11:13:40.937305   14271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:13:41.120621   14271 system_pods.go:59] 9 kube-system pods found
	I0602 11:13:41.120635   14271 system_pods.go:61] "coredns-64897985d-q7f6l" [9348f86d-08db-41f1-a8fa-33f0b74cf0ab] Running
	I0602 11:13:41.120638   14271 system_pods.go:61] "coredns-64897985d-qp56l" [2e9f42d9-06d2-44c5-ab59-2560b50fd5c5] Running
	I0602 11:13:41.120642   14271 system_pods.go:61] "etcd-default-k8s-different-port-20220602110711-2113" [f8512c1a-947d-4506-b868-13343b661686] Running
	I0602 11:13:41.120647   14271 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220602110711-2113" [78222b72-98f6-4017-92ea-655597e0b1e9] Running
	I0602 11:13:41.120651   14271 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220602110711-2113" [52d52399-ee97-41c1-93be-483fe82a7b3b] Running
	I0602 11:13:41.120655   14271 system_pods.go:61] "kube-proxy-xbj6w" [e3405b28-0afd-4a57-b9aa-4c12c8880eee] Running
	I0602 11:13:41.120670   14271 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220602110711-2113" [a8d2d945-2501-4768-8ead-483ebbe19526] Running
	I0602 11:13:41.120677   14271 system_pods.go:61] "metrics-server-b955d9d8-mmzb2" [e28d8ad9-0512-4720-8607-2033e71a4b2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:13:41.120682   14271 system_pods.go:61] "storage-provisioner" [15b1bcd9-2251-4762-bbe4-61e3c8db0e3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 11:13:41.120686   14271 system_pods.go:74] duration metric: took 183.374077ms to wait for pod list to return data ...
	I0602 11:13:41.120691   14271 default_sa.go:34] waiting for default service account to be created ...
	I0602 11:13:41.318742   14271 default_sa.go:45] found service account: "default"
	I0602 11:13:41.318755   14271 default_sa.go:55] duration metric: took 198.056977ms for default service account to be created ...
	I0602 11:13:41.318760   14271 system_pods.go:116] waiting for k8s-apps to be running ...
	I0602 11:13:41.524024   14271 system_pods.go:86] 9 kube-system pods found
	I0602 11:13:41.524043   14271 system_pods.go:89] "coredns-64897985d-q7f6l" [9348f86d-08db-41f1-a8fa-33f0b74cf0ab] Running
	I0602 11:13:41.524050   14271 system_pods.go:89] "coredns-64897985d-qp56l" [2e9f42d9-06d2-44c5-ab59-2560b50fd5c5] Running
	I0602 11:13:41.524056   14271 system_pods.go:89] "etcd-default-k8s-different-port-20220602110711-2113" [f8512c1a-947d-4506-b868-13343b661686] Running
	I0602 11:13:41.524062   14271 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220602110711-2113" [78222b72-98f6-4017-92ea-655597e0b1e9] Running
	I0602 11:13:41.524068   14271 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220602110711-2113" [52d52399-ee97-41c1-93be-483fe82a7b3b] Running
	I0602 11:13:41.524072   14271 system_pods.go:89] "kube-proxy-xbj6w" [e3405b28-0afd-4a57-b9aa-4c12c8880eee] Running
	I0602 11:13:41.524078   14271 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220602110711-2113" [a8d2d945-2501-4768-8ead-483ebbe19526] Running
	I0602 11:13:41.524090   14271 system_pods.go:89] "metrics-server-b955d9d8-mmzb2" [e28d8ad9-0512-4720-8607-2033e71a4b2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:13:41.524098   14271 system_pods.go:89] "storage-provisioner" [15b1bcd9-2251-4762-bbe4-61e3c8db0e3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 11:13:41.524112   14271 system_pods.go:126] duration metric: took 205.343684ms to wait for k8s-apps to be running ...
	I0602 11:13:41.524125   14271 system_svc.go:44] waiting for kubelet service to be running ....
	I0602 11:13:41.524183   14271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:13:41.538402   14271 system_svc.go:56] duration metric: took 14.275295ms WaitForService to wait for kubelet.
	I0602 11:13:41.538416   14271 kubeadm.go:572] duration metric: took 5.747903146s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0602 11:13:41.538436   14271 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:13:41.718284   14271 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:13:41.718296   14271 node_conditions.go:123] node cpu capacity is 6
	I0602 11:13:41.718308   14271 node_conditions.go:105] duration metric: took 179.851499ms to run NodePressure ...
	I0602 11:13:41.718315   14271 start.go:213] waiting for startup goroutines ...
	I0602 11:13:41.748420   14271 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0602 11:13:41.770365   14271 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220602110711-2113" cluster and "default" namespace by default
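	
	[Editor's note] The Done! line above means the kubeconfig updated earlier in this log now has this cluster as its current context, so a quick health check needs no --context flag. A minimal sketch, assuming the kubectl 1.24.0 reported by this run:
	
	  # Confirm the newly configured context and that the kube-system pods are up.
	  kubectl config current-context    # expect default-k8s-different-port-20220602110711-2113
	  kubectl get pods -n kube-system
	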
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 18:08:16 UTC, end at Thu 2022-06-02 18:14:39 UTC. --
	Jun 02 18:13:11 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:11.372698211Z" level=info msg="ignoring event" container=fe28ec423dfc588f4c91ef67ea093e89e646b9b36106e1a3383c241a4104f1a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:11 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:11.492763172Z" level=info msg="ignoring event" container=b027d25457fadf51caeba229c1421e38ec3d07a7f260c1328ad4e7ae57b8a241 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:12 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:12.711909080Z" level=info msg="ignoring event" container=e3f092a4e4b8acf18687dc60526070c9d8d232612cc91cb52cf0731073f02c08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:12 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:12.820503412Z" level=info msg="ignoring event" container=f6fdeb2d2024328e194ac49cf725fd4cd9a0812f07d10b8455a0450e16b6313e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:12 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:12.919784024Z" level=info msg="ignoring event" container=305edf5a3e80c60246fe6dfcc5a28ff8e49202a0a46640dbb798f10250ab3fb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:13 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:13.025058512Z" level=info msg="ignoring event" container=c3d571000adc323d3a9bf6988309cca5abd5bafea930f5331d5ba851a3d93907 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:13 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:13.126153005Z" level=info msg="ignoring event" container=e26bcb8cbcbfcdc81456207f69301a6b88ef553e385ce754f54997b41f58ce4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:13 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:13.249009502Z" level=info msg="ignoring event" container=63db6ec64f911f2d15fee1d3fb9f928cfdf61ac996e2861115beed0a83f23968 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:38 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:38.408686529Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:13:38 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:38.408729060Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:13:38 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:38.411073194Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:13:39 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:39.773797307Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 02 18:13:42 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:42.085885151Z" level=info msg="ignoring event" container=b8347f055fbfe1cddb5e3632fef6cfa8376ccd28bc0da6c84d772dd2384f59e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:42 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:42.174100815Z" level=info msg="ignoring event" container=bbd08bec2896f6c80cd1992ef5444b7ef036a0d97623f6c50c88beed1da58407 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:45 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:45.203257961Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 02 18:13:45 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:45.433252897Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 02 18:13:48 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:48.488065827Z" level=info msg="ignoring event" container=8b8f98aacf94d6053a9a341c7aed14010e780745199b2aa1c38cea05dafc2c82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:48 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:48.721293917Z" level=info msg="ignoring event" container=19df3a02ce9fabb77b369289713d678fbc579775d435d90d17e2bb50649da0cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:13:51 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:51.708060537Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:13:51 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:51.708101761Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:13:51 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:13:51.709293217Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:14:37 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:14:37.072653002Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:14:37 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:14:37.072677165Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:14:37 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:14:37.074004108Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:14:38 default-k8s-different-port-20220602110711-2113 dockerd[130]: time="2022-06-02T18:14:38.253628759Z" level=info msg="ignoring event" container=1d4f58b8ab58b91cb80b113e787473e46c7fa118eb2dd33bba8563c527bfc47f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	1d4f58b8ab58b       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   2                   76da2afb3b84e
	b32592514baa8       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   56 seconds ago       Running             kubernetes-dashboard        0                   2da3cbdc53b71
	10fca54a3cf3e       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   dac1a4c4d3db8
	46ce0d0cc477f       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   3e3e43cee22d7
	33c5ea97096cf       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   504e0bb47eb30
	46a270fcaca30       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   1bd2cc75567c8
	700916401ac8b       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   8b2a93e6f922f
	442109d4ab3c3       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   ba6ef3456d1c8
	a1efd30b0df11       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   22c629e5f10ee
	
	* 
	* ==> coredns [33c5ea97096c] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220602110711-2113
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220602110711-2113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=default-k8s-different-port-20220602110711-2113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T11_13_22_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 18:13:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220602110711-2113
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 18:14:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 18:14:33 +0000   Thu, 02 Jun 2022 18:13:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 18:14:33 +0000   Thu, 02 Jun 2022 18:13:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 18:14:33 +0000   Thu, 02 Jun 2022 18:13:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 18:14:33 +0000   Thu, 02 Jun 2022 18:14:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    default-k8s-different-port-20220602110711-2113
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                f4097a03-fe19-4f34-a68b-cf1227538da7
	  Boot ID:                    a475dd08-72ba-4c6d-89c1-75a58adc3783
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-qp56l                                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     66s
	  kube-system                 etcd-default-k8s-different-port-20220602110711-2113                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         78s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220602110711-2113              250m (4%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220602110711-2113     200m (3%)     0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-xbj6w                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220602110711-2113              100m (1%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 metrics-server-b955d9d8-mmzb2                                               100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         63s
	  kube-system                 storage-provisioner                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-xt4wh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        kubernetes-dashboard-cd7c84bfc-hqkxc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 63s   kube-proxy  
	  Normal  NodeHasSufficientMemory  78s   kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s   kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s   kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 78s   kubelet     Starting kubelet.
	  Normal  NodeReady                68s   kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeReady
	  Normal  Starting                 7s    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s    kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  7s    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7s    kubelet     Node default-k8s-different-port-20220602110711-2113 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [442109d4ab3c] <==
	* {"level":"info","ts":"2022-06-02T18:13:16.907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-06-02T18:13:16.908Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-06-02T18:13:16.910Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-02T18:13:16.910Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:13:16.910Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:13:16.910Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-02T18:13:16.910Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:13:17.703Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:13:17.704Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:13:17.704Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:13:17.704Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:13:17.704Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:default-k8s-different-port-20220602110711-2113 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T18:13:17.704Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:13:17.705Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-02T18:13:17.705Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T18:13:17.707Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T18:13:17.707Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  18:14:40 up  1:02,  0 users,  load average: 1.00, 0.86, 1.00
	Linux default-k8s-different-port-20220602110711-2113 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [700916401ac8] <==
	* I0602 18:13:20.535613       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 18:13:20.603648       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 18:13:20.704024       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0602 18:13:20.708113       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0602 18:13:20.708826       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 18:13:20.711789       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0602 18:13:21.354606       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 18:13:22.325496       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 18:13:22.331089       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0602 18:13:22.338882       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0602 18:13:22.503874       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0602 18:13:34.192335       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0602 18:13:35.140615       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0602 18:13:37.183853       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0602 18:13:37.495370       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.107.79.208]
	I0602 18:13:38.180001       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.96.44.30]
	I0602 18:13:38.195066       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.106.6.255]
	W0602 18:13:38.301513       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 18:13:38.301612       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 18:13:38.301619       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0602 18:14:38.260498       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 18:14:38.260569       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 18:14:38.260576       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [a1efd30b0df1] <==
	* I0602 18:13:35.145860       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xbj6w"
	I0602 18:13:35.299057       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0602 18:13:35.302554       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-q7f6l"
	I0602 18:13:37.284978       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0602 18:13:37.291990       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-mmzb2"
	I0602 18:13:38.019546       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0602 18:13:38.025265       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:13:38.059404       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:13:38.067535       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-cd7c84bfc to 1"
	I0602 18:13:38.067582       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:13:38.067715       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:13:38.068520       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:13:38.074696       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0602 18:13:38.075128       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:13:38.075147       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:13:38.078813       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:13:38.078865       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:13:38.084264       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:13:38.084406       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:13:38.086799       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:13:38.086849       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0602 18:13:38.161725       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-cd7c84bfc-hqkxc"
	I0602 18:13:38.164955       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-xt4wh"
	E0602 18:14:33.141656       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0602 18:14:33.148987       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [46ce0d0cc477] <==
	* I0602 18:13:37.088700       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0602 18:13:37.088745       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0602 18:13:37.088792       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 18:13:37.180591       1 server_others.go:206] "Using iptables Proxier"
	I0602 18:13:37.180612       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 18:13:37.180616       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 18:13:37.180652       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 18:13:37.180991       1 server.go:656] "Version info" version="v1.23.6"
	I0602 18:13:37.181673       1 config.go:317] "Starting service config controller"
	I0602 18:13:37.181680       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 18:13:37.181829       1 config.go:226] "Starting endpoint slice config controller"
	I0602 18:13:37.181950       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 18:13:37.282107       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 18:13:37.282173       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [46a270fcaca3] <==
	* W0602 18:13:19.289692       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 18:13:19.289826       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 18:13:19.289997       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 18:13:19.290052       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 18:13:19.290002       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0602 18:13:19.290066       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 18:13:20.197528       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 18:13:20.197578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0602 18:13:20.199642       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 18:13:20.199741       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 18:13:20.223687       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0602 18:13:20.223803       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0602 18:13:20.232582       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0602 18:13:20.232617       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 18:13:20.234192       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 18:13:20.234224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 18:13:20.371321       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0602 18:13:20.371371       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0602 18:13:20.432041       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0602 18:13:20.432076       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0602 18:13:20.563538       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 18:13:20.563572       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 18:13:20.691881       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0602 18:13:21.368883       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0602 18:13:23.086818       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 18:08:16 UTC, end at Thu 2022-06-02 18:14:41 UTC. --
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.671977    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49qkd\" (UniqueName: \"kubernetes.io/projected/e28d8ad9-0512-4720-8607-2033e71a4b2b-kube-api-access-49qkd\") pod \"metrics-server-b955d9d8-mmzb2\" (UID: \"e28d8ad9-0512-4720-8607-2033e71a4b2b\") " pod="kube-system/metrics-server-b955d9d8-mmzb2"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.671992    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/15b1bcd9-2251-4762-bbe4-61e3c8db0e3c-tmp\") pod \"storage-provisioner\" (UID: \"15b1bcd9-2251-4762-bbe4-61e3c8db0e3c\") " pod="kube-system/storage-provisioner"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.672009    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r64s7\" (UniqueName: \"kubernetes.io/projected/15b1bcd9-2251-4762-bbe4-61e3c8db0e3c-kube-api-access-r64s7\") pod \"storage-provisioner\" (UID: \"15b1bcd9-2251-4762-bbe4-61e3c8db0e3c\") " pod="kube-system/storage-provisioner"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.672023    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3405b28-0afd-4a57-b9aa-4c12c8880eee-xtables-lock\") pod \"kube-proxy-xbj6w\" (UID: \"e3405b28-0afd-4a57-b9aa-4c12c8880eee\") " pod="kube-system/kube-proxy-xbj6w"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.672039    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hkpv\" (UniqueName: \"kubernetes.io/projected/e3405b28-0afd-4a57-b9aa-4c12c8880eee-kube-api-access-7hkpv\") pod \"kube-proxy-xbj6w\" (UID: \"e3405b28-0afd-4a57-b9aa-4c12c8880eee\") " pod="kube-system/kube-proxy-xbj6w"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.672053    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3405b28-0afd-4a57-b9aa-4c12c8880eee-lib-modules\") pod \"kube-proxy-xbj6w\" (UID: \"e3405b28-0afd-4a57-b9aa-4c12c8880eee\") " pod="kube-system/kube-proxy-xbj6w"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.672066    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2b45dde7-82b4-439a-b822-381b15db860e-tmp-volume\") pod \"kubernetes-dashboard-cd7c84bfc-hqkxc\" (UID: \"2b45dde7-82b4-439a-b822-381b15db860e\") " pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-hqkxc"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.672079    7110 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e9f42d9-06d2-44c5-ab59-2560b50fd5c5-config-volume\") pod \"coredns-64897985d-qp56l\" (UID: \"2e9f42d9-06d2-44c5-ab59-2560b50fd5c5\") " pod="kube-system/coredns-64897985d-qp56l"
	Jun 02 18:14:34 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:34.672088    7110 reconciler.go:157] "Reconciler: start to sync state"
	Jun 02 18:14:35 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:35.847702    7110 request.go:665] Waited for 1.152339589s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Jun 02 18:14:35 default-k8s-different-port-20220602110711-2113 kubelet[7110]: E0602 18:14:35.869275    7110 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220602110711-2113\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220602110711-2113"
	Jun 02 18:14:36 default-k8s-different-port-20220602110711-2113 kubelet[7110]: E0602 18:14:36.052627    7110 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220602110711-2113\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220602110711-2113"
	Jun 02 18:14:36 default-k8s-different-port-20220602110711-2113 kubelet[7110]: E0602 18:14:36.252529    7110 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220602110711-2113\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220602110711-2113"
	Jun 02 18:14:36 default-k8s-different-port-20220602110711-2113 kubelet[7110]: E0602 18:14:36.469264    7110 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220602110711-2113\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220602110711-2113"
	Jun 02 18:14:37 default-k8s-different-port-20220602110711-2113 kubelet[7110]: E0602 18:14:37.074419    7110 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 02 18:14:37 default-k8s-different-port-20220602110711-2113 kubelet[7110]: E0602 18:14:37.074475    7110 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 02 18:14:37 default-k8s-different-port-20220602110711-2113 kubelet[7110]: E0602 18:14:37.074614    7110 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-49qkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-mmzb2_kube-system(e28d8ad9-0512-4720-8607-2033e71a4b2b): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 02 18:14:37 default-k8s-different-port-20220602110711-2113 kubelet[7110]: E0602 18:14:37.074689    7110 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-mmzb2" podUID=e28d8ad9-0512-4720-8607-2033e71a4b2b
	Jun 02 18:14:37 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:37.953369    7110 scope.go:110] "RemoveContainer" containerID="19df3a02ce9fabb77b369289713d678fbc579775d435d90d17e2bb50649da0cb"
	Jun 02 18:14:38 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:38.718197    7110 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-xt4wh through plugin: invalid network status for"
	Jun 02 18:14:38 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:38.723433    7110 scope.go:110] "RemoveContainer" containerID="19df3a02ce9fabb77b369289713d678fbc579775d435d90d17e2bb50649da0cb"
	Jun 02 18:14:38 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:38.723916    7110 scope.go:110] "RemoveContainer" containerID="1d4f58b8ab58b91cb80b113e787473e46c7fa118eb2dd33bba8563c527bfc47f"
	Jun 02 18:14:38 default-k8s-different-port-20220602110711-2113 kubelet[7110]: E0602 18:14:38.724109    7110 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-xt4wh_kubernetes-dashboard(4a581662-4b96-4aff-a293-48be5f24767e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-xt4wh" podUID=4a581662-4b96-4aff-a293-48be5f24767e
	Jun 02 18:14:38 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:38.829069    7110 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Jun 02 18:14:39 default-k8s-different-port-20220602110711-2113 kubelet[7110]: I0602 18:14:39.729386    7110 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-xt4wh through plugin: invalid network status for"
	
	* 
	* ==> kubernetes-dashboard [b32592514baa] <==
	* 2022/06/02 18:13:44 Using namespace: kubernetes-dashboard
	2022/06/02 18:13:44 Using in-cluster config to connect to apiserver
	2022/06/02 18:13:44 Using secret token for csrf signing
	2022/06/02 18:13:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/02 18:13:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/02 18:13:44 Successful initial request to the apiserver, version: v1.23.6
	2022/06/02 18:13:44 Generating JWE encryption key
	2022/06/02 18:13:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/02 18:13:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/02 18:13:45 Initializing JWE encryption key from synchronized object
	2022/06/02 18:13:45 Creating in-cluster Sidecar client
	2022/06/02 18:13:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/02 18:13:45 Serving insecurely on HTTP port: 9090
	2022/06/02 18:14:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/02 18:13:44 Starting overwatch
	
	* 
	* ==> storage-provisioner [10fca54a3cf3] <==
	* I0602 18:13:38.502207       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 18:13:38.510010       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 18:13:38.510057       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 18:13:38.514816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 18:13:38.515010       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220602110711-2113_79550121-4034-4072-a2ef-c0cb066261bf!
	I0602 18:13:38.515376       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a4013fb-89b6-4394-9436-841beb6e1d6b", APIVersion:"v1", ResourceVersion:"566", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220602110711-2113_79550121-4034-4072-a2ef-c0cb066261bf became leader
	I0602 18:13:38.615985       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220602110711-2113_79550121-4034-4072-a2ef-c0cb066261bf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220602110711-2113 -n default-k8s-different-port-20220602110711-2113
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220602110711-2113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-mmzb2
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220602110711-2113 describe pod metrics-server-b955d9d8-mmzb2
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220602110711-2113 describe pod metrics-server-b955d9d8-mmzb2: exit status 1 (262.14356ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-mmzb2" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220602110711-2113 describe pod metrics-server-b955d9d8-mmzb2: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (43.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (49.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220602111446-2113 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220602111446-2113 -n newest-cni-20220602111446-2113

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220602111446-2113 -n newest-cni-20220602111446-2113: exit status 2 (16.100481921s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220602111446-2113 -n newest-cni-20220602111446-2113

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220602111446-2113 -n newest-cni-20220602111446-2113: exit status 2 (16.102206417s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20220602111446-2113 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220602111446-2113 -n newest-cni-20220602111446-2113

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220602111446-2113 -n newest-cni-20220602111446-2113
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220602111446-2113
helpers_test.go:235: (dbg) docker inspect newest-cni-20220602111446-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f3555833564687857c958bc9235bd3dbc9a1d50fb5d1ed0f38d79f116a0f1b30",
	        "Created": "2022-06-02T18:14:53.071653941Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 244584,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T18:15:38.429000634Z",
	            "FinishedAt": "2022-06-02T18:15:36.450648512Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/f3555833564687857c958bc9235bd3dbc9a1d50fb5d1ed0f38d79f116a0f1b30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f3555833564687857c958bc9235bd3dbc9a1d50fb5d1ed0f38d79f116a0f1b30/hostname",
	        "HostsPath": "/var/lib/docker/containers/f3555833564687857c958bc9235bd3dbc9a1d50fb5d1ed0f38d79f116a0f1b30/hosts",
	        "LogPath": "/var/lib/docker/containers/f3555833564687857c958bc9235bd3dbc9a1d50fb5d1ed0f38d79f116a0f1b30/f3555833564687857c958bc9235bd3dbc9a1d50fb5d1ed0f38d79f116a0f1b30-json.log",
	        "Name": "/newest-cni-20220602111446-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220602111446-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220602111446-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ba2cea67523c33ad0b49af10e5240c2230f214aeb2d5d658755f1055661da5ff-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ba2cea67523c33ad0b49af10e5240c2230f214aeb2d5d658755f1055661da5ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ba2cea67523c33ad0b49af10e5240c2230f214aeb2d5d658755f1055661da5ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ba2cea67523c33ad0b49af10e5240c2230f214aeb2d5d658755f1055661da5ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220602111446-2113",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220602111446-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220602111446-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220602111446-2113",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220602111446-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d09d03dce3b8d3f5936a98cf2ceea7fbefd2b4ddf42cc4f9dedc11ff734d55c8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53981"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53982"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53983"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53984"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53985"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d09d03dce3b8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220602111446-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f35558335646",
	                        "newest-cni-20220602111446-2113"
	                    ],
	                    "NetworkID": "666a37f7840188b1f9b0f32678d9a5bc2c4b1c17547ec3fd4a4cd1090a45f919",
	                    "EndpointID": "ba9648796e12ff373bad6a847e3b0164286f95e7cc1692f5d621b0b70a7a564e",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220602111446-2113 -n newest-cni-20220602111446-2113
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220602111446-2113 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220602111446-2113 logs -n 25: (4.332851188s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| logs    | no-preload-20220602105919-2113                             | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:06 PDT | 02 Jun 22 11:07 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | no-preload-20220602105919-2113                             | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | no-preload-20220602105919-2113                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | no-preload-20220602105919-2113                             |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220602105906-2113                        | old-k8s-version-20220602105906-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:12 PDT | 02 Jun 22 11:13 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:13 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:13 PDT | 02 Jun 22 11:13 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:13 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220602110711-2113             | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220602110711-2113             | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220602111446-2113 --memory=2200            | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220602111446-2113 --memory=2200            | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
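	For readability, the wrapped start entries for the newest-cni profile in the table above correspond to a single invocation of the form below (binary name and flags taken verbatim from the table and from the MINIKUBE_BIN setting later in this log):

	out/minikube-darwin-amd64 start -p newest-cni-20220602111446-2113 --memory=2200 \
	  --alsologtostderr --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true --network-plugin=cni \
	  --extra-config=kubelet.network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
	  --driver=docker --kubernetes-version=v1.23.6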
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 11:15:37
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 11:15:37.112400   14877 out.go:296] Setting OutFile to fd 1 ...
	I0602 11:15:37.112636   14877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:15:37.112642   14877 out.go:309] Setting ErrFile to fd 2...
	I0602 11:15:37.112646   14877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:15:37.112746   14877 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 11:15:37.113006   14877 out.go:303] Setting JSON to false
	I0602 11:15:37.128139   14877 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4506,"bootTime":1654189231,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 11:15:37.128239   14877 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 11:15:37.166432   14877 out.go:177] * [newest-cni-20220602111446-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 11:15:37.204201   14877 notify.go:193] Checking for updates...
	I0602 11:15:37.226160   14877 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 11:15:37.247932   14877 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:15:37.269028   14877 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 11:15:37.311109   14877 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 11:15:37.332132   14877 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 11:15:37.354856   14877 config.go:178] Loaded profile config "newest-cni-20220602111446-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:15:37.355487   14877 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 11:15:37.425618   14877 docker.go:137] docker version: linux-20.10.14
	I0602 11:15:37.425792   14877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:15:37.552837   14877 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:15:37.492069854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:15:37.596427   14877 out.go:177] * Using the docker driver based on existing profile
	I0602 11:15:37.617563   14877 start.go:284] selected driver: docker
	I0602 11:15:37.617592   14877 start.go:806] validating driver "docker" against &{Name:newest-cni-20220602111446-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602111446-2113 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[a
piserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:15:37.617786   14877 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 11:15:37.621183   14877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:15:37.747027   14877 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:15:37.686581534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:15:37.747205   14877 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0602 11:15:37.747221   14877 cni.go:95] Creating CNI manager for ""
	I0602 11:15:37.747229   14877 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:15:37.747237   14877 start_flags.go:306] config:
	{Name:newest-cni-20220602111446-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602111446-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_
ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:15:37.769359   14877 out.go:177] * Starting control plane node newest-cni-20220602111446-2113 in cluster newest-cni-20220602111446-2113
	I0602 11:15:37.812916   14877 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 11:15:37.833742   14877 out.go:177] * Pulling base image ...
	I0602 11:15:37.877054   14877 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:15:37.877055   14877 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 11:15:37.877151   14877 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 11:15:37.877170   14877 cache.go:57] Caching tarball of preloaded images
	I0602 11:15:37.877383   14877 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 11:15:37.877412   14877 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 11:15:37.878438   14877 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/config.json ...
	I0602 11:15:37.942040   14877 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 11:15:37.942055   14877 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 11:15:37.942065   14877 cache.go:206] Successfully downloaded all kic artifacts
	I0602 11:15:37.942104   14877 start.go:352] acquiring machines lock for newest-cni-20220602111446-2113: {Name:mk60bd3a84f323b50cc7374421d304aa58ac015f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 11:15:37.942185   14877 start.go:356] acquired machines lock for "newest-cni-20220602111446-2113" in 57.699µs
	I0602 11:15:37.942204   14877 start.go:94] Skipping create...Using existing machine configuration
	I0602 11:15:37.942214   14877 fix.go:55] fixHost starting: 
	I0602 11:15:37.942447   14877 cli_runner.go:164] Run: docker container inspect newest-cni-20220602111446-2113 --format={{.State.Status}}
	I0602 11:15:38.009634   14877 fix.go:103] recreateIfNeeded on newest-cni-20220602111446-2113: state=Stopped err=<nil>
	W0602 11:15:38.009662   14877 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 11:15:38.053451   14877 out.go:177] * Restarting existing docker container for "newest-cni-20220602111446-2113" ...
	I0602 11:15:38.075533   14877 cli_runner.go:164] Run: docker start newest-cni-20220602111446-2113
	I0602 11:15:38.429318   14877 cli_runner.go:164] Run: docker container inspect newest-cni-20220602111446-2113 --format={{.State.Status}}
	I0602 11:15:38.501319   14877 kic.go:416] container "newest-cni-20220602111446-2113" state is running.
	I0602 11:15:38.501922   14877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602111446-2113
	I0602 11:15:38.576290   14877 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/config.json ...
	I0602 11:15:38.576686   14877 machine.go:88] provisioning docker machine ...
	I0602 11:15:38.576711   14877 ubuntu.go:169] provisioning hostname "newest-cni-20220602111446-2113"
	I0602 11:15:38.576772   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:38.649747   14877 main.go:134] libmachine: Using SSH client type: native
	I0602 11:15:38.649929   14877 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53981 <nil> <nil>}
	I0602 11:15:38.649943   14877 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220602111446-2113 && echo "newest-cni-20220602111446-2113" | sudo tee /etc/hostname
	I0602 11:15:38.773710   14877 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220602111446-2113
	
	I0602 11:15:38.773805   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:38.846243   14877 main.go:134] libmachine: Using SSH client type: native
	I0602 11:15:38.846480   14877 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53981 <nil> <nil>}
	I0602 11:15:38.846500   14877 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220602111446-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220602111446-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220602111446-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 11:15:38.972226   14877 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:15:38.972246   14877 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 11:15:38.972265   14877 ubuntu.go:177] setting up certificates
	I0602 11:15:38.972276   14877 provision.go:83] configureAuth start
	I0602 11:15:38.972349   14877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602111446-2113
	I0602 11:15:39.044589   14877 provision.go:138] copyHostCerts
	I0602 11:15:39.044674   14877 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 11:15:39.044683   14877 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 11:15:39.044772   14877 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 11:15:39.045030   14877 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 11:15:39.045038   14877 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 11:15:39.045096   14877 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 11:15:39.045237   14877 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 11:15:39.045242   14877 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 11:15:39.045299   14877 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 11:15:39.045424   14877 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220602111446-2113 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220602111446-2113]
	I0602 11:15:39.214567   14877 provision.go:172] copyRemoteCerts
	I0602 11:15:39.214630   14877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 11:15:39.214676   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:39.284765   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:39.370464   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 11:15:39.387895   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 11:15:39.404256   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0602 11:15:39.420397   14877 provision.go:86] duration metric: configureAuth took 448.099851ms
	I0602 11:15:39.420410   14877 ubuntu.go:193] setting minikube options for container-runtime
	I0602 11:15:39.420577   14877 config.go:178] Loaded profile config "newest-cni-20220602111446-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:15:39.420634   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:39.491169   14877 main.go:134] libmachine: Using SSH client type: native
	I0602 11:15:39.491311   14877 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53981 <nil> <nil>}
	I0602 11:15:39.491323   14877 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 11:15:39.605393   14877 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 11:15:39.605405   14877 ubuntu.go:71] root file system type: overlay
	I0602 11:15:39.605543   14877 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 11:15:39.605624   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:39.676737   14877 main.go:134] libmachine: Using SSH client type: native
	I0602 11:15:39.676897   14877 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53981 <nil> <nil>}
	I0602 11:15:39.676942   14877 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 11:15:39.800313   14877 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 11:15:39.800397   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:39.871024   14877 main.go:134] libmachine: Using SSH client type: native
	I0602 11:15:39.871200   14877 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53981 <nil> <nil>}
	I0602 11:15:39.871222   14877 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 11:15:39.990940   14877 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:15:39.990977   14877 machine.go:91] provisioned docker machine in 1.414258393s
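	The docker.service update above follows a write-compare-swap pattern: the rendered unit is streamed to /lib/systemd/system/docker.service.new over SSH, and it is only moved into place (followed by a forced daemon-reload, enable, and restart) when it differs from the unit already on disk. Spelled out over several lines, the one-liner from the log is:

	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	}

	The empty command output here suggests the unit on disk already matched, so the restart branch was skipped and provisioning completed in about 1.4s.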
	I0602 11:15:39.990986   14877 start.go:306] post-start starting for "newest-cni-20220602111446-2113" (driver="docker")
	I0602 11:15:39.990993   14877 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 11:15:39.991058   14877 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 11:15:39.991110   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:40.061996   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:40.147842   14877 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 11:15:40.151457   14877 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 11:15:40.151470   14877 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 11:15:40.151478   14877 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 11:15:40.151482   14877 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 11:15:40.151490   14877 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 11:15:40.151589   14877 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 11:15:40.151721   14877 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 11:15:40.151868   14877 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 11:15:40.158741   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:15:40.175787   14877 start.go:309] post-start completed in 184.787196ms
	I0602 11:15:40.175854   14877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 11:15:40.175902   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:40.246857   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:40.329877   14877 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 11:15:40.334319   14877 fix.go:57] fixHost completed within 2.39206541s
	I0602 11:15:40.334330   14877 start.go:81] releasing machines lock for "newest-cni-20220602111446-2113", held for 2.392096368s
	I0602 11:15:40.334404   14877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602111446-2113
	I0602 11:15:40.405453   14877 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 11:15:40.405461   14877 ssh_runner.go:195] Run: systemctl --version
	I0602 11:15:40.405541   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:40.405536   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:40.482040   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:40.484877   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:40.692481   14877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 11:15:40.704677   14877 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:15:40.714842   14877 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 11:15:40.714890   14877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 11:15:40.724185   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 11:15:40.736917   14877 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 11:15:40.807790   14877 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 11:15:40.870552   14877 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:15:40.880621   14877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 11:15:40.943966   14877 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 11:15:40.953302   14877 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:15:40.988211   14877 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:15:41.069261   14877 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 11:15:41.069447   14877 cli_runner.go:164] Run: docker exec -t newest-cni-20220602111446-2113 dig +short host.docker.internal
	I0602 11:15:41.217049   14877 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 11:15:41.217148   14877 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 11:15:41.221595   14877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
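	The host IP discovered via dig above (192.168.65.2) is then pinned inside the node by rewriting /etc/hosts through a temp file: any existing host.minikube.internal entry is filtered out and a fresh one is appended. The same one-liner, unrolled:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo "192.168.65.2	host.minikube.internal"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts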
	I0602 11:15:41.232002   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:41.324236   14877 out.go:177]   - kubelet.network-plugin=cni
	I0602 11:15:41.346204   14877 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0602 11:15:41.368000   14877 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:15:41.368122   14877 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:15:41.399128   14877 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 11:15:41.399144   14877 docker.go:541] Images already preloaded, skipping extraction
	I0602 11:15:41.399220   14877 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:15:41.427956   14877 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 11:15:41.427979   14877 cache_images.go:84] Images are preloaded, skipping loading
	I0602 11:15:41.428063   14877 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 11:15:41.501895   14877 cni.go:95] Creating CNI manager for ""
	I0602 11:15:41.501908   14877 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:15:41.501923   14877 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0602 11:15:41.501936   14877 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220602111446-2113 NodeName:newest-cni-20220602111446-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false]
Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 11:15:41.502036   14877 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220602111446-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 11:15:41.502105   14877 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220602111446-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602111446-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 11:15:41.502163   14877 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 11:15:41.509619   14877 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 11:15:41.509669   14877 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 11:15:41.516795   14877 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I0602 11:15:41.529376   14877 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 11:15:41.541872   14877 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2187 bytes)
	I0602 11:15:41.554119   14877 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 11:15:41.557752   14877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
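The two commands above make the control-plane hostname mapping idempotent: the grep checks whether /etc/hosts already maps control-plane.minikube.internal to 192.168.58.2, and the bash one-liner rebuilds the file with any stale mapping removed and the current one appended. A minimal Go sketch of the same pattern follows; ensureHostsEntry is a hypothetical helper, not minikube's actual code, and the path used in main is a scratch file rather than the real /etc/hosts.

package main

// Minimal sketch of the idempotent /etc/hosts update seen in the log:
// drop any stale line for the host name, then append "IP<TAB>hostname".

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove any existing mapping for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Example against a scratch copy rather than the real /etc/hosts.
	if err := ensureHostsEntry("/tmp/hosts.example", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}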
	I0602 11:15:41.567058   14877 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113 for IP: 192.168.58.2
	I0602 11:15:41.567162   14877 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 11:15:41.567215   14877 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 11:15:41.567289   14877 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/client.key
	I0602 11:15:41.567348   14877 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/apiserver.key.cee25041
	I0602 11:15:41.567399   14877 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/proxy-client.key
	I0602 11:15:41.567594   14877 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 11:15:41.567628   14877 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 11:15:41.567640   14877 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 11:15:41.567673   14877 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 11:15:41.567702   14877 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 11:15:41.567735   14877 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 11:15:41.567799   14877 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:15:41.568309   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 11:15:41.585232   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 11:15:41.601960   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 11:15:41.618739   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 11:15:41.635310   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 11:15:41.651915   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 11:15:41.669160   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 11:15:41.685715   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 11:15:41.703230   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 11:15:41.722127   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 11:15:41.739378   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 11:15:41.757160   14877 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 11:15:41.769434   14877 ssh_runner.go:195] Run: openssl version
	I0602 11:15:41.774759   14877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 11:15:41.782752   14877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 11:15:41.786642   14877 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 11:15:41.786684   14877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 11:15:41.791948   14877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 11:15:41.799255   14877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 11:15:41.807112   14877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:15:41.811045   14877 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:15:41.811082   14877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:15:41.816215   14877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 11:15:41.823551   14877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 11:15:41.831043   14877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 11:15:41.834707   14877 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 11:15:41.834744   14877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 11:15:41.839706   14877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
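The cert steps above install each PEM into the system trust store: link it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and create the /etc/ssl/certs/<hash>.0 symlink (3ec20f2e.0, b5213941.0 and 51391683.0 in this run) so TLS lookups can find it by hash. Below is a rough Go sketch of that flow which shells out to the same openssl invocation shown in the log; the helper names are invented for illustration.

package main

// Sketch of the CA-install flow visible in the log. Only the openssl
// invocation mirrors the log exactly; the helpers are hypothetical.

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHash runs `openssl x509 -hash -noout -in <cert>` and returns the
// hash, e.g. "b5213941", which names the /etc/ssl/certs/<hash>.0 symlink.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func installCA(certPath string) error {
	hash, err := subjectHash(certPath)
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: replace any existing symlink, then repoint it.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}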
	I0602 11:15:41.846754   14877 kubeadm.go:395] StartCluster: {Name:newest-cni-20220602111446-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602111446-2113 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_r
unning:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:15:41.846854   14877 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:15:41.875704   14877 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 11:15:41.883874   14877 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 11:15:41.883889   14877 kubeadm.go:626] restartCluster start
	I0602 11:15:41.883939   14877 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 11:15:41.891051   14877 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:41.891119   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:41.963185   14877 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220602111446-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:15:41.963373   14877 kubeconfig.go:127] "newest-cni-20220602111446-2113" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig - will repair!
	I0602 11:15:41.963722   14877 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:15:41.965032   14877 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 11:15:41.972595   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:41.972647   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:41.980555   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:42.180928   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:42.181038   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:42.192169   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:42.382715   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:42.382839   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:42.394533   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:42.580996   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:42.581099   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:42.592126   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:42.782772   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:42.782882   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:42.793804   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:42.982779   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:42.982903   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:42.994213   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:43.181683   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:43.181817   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:43.192930   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:43.381083   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:43.381152   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:43.390694   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:43.582720   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:43.582874   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:43.595350   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:43.782814   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:43.782921   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:43.793934   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:43.982410   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:43.982547   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:43.993503   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:44.181271   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:44.181368   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:44.189799   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:44.381376   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:44.381517   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:44.393282   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:44.580945   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:44.581078   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:44.592087   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:44.782598   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:44.782792   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:44.793424   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:44.981183   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:44.981327   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:44.991725   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:44.991735   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:44.991782   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:44.999867   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:44.999879   14877 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
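The block of repeated "Checking apiserver status" entries above is a bounded poll: roughly every 200 ms minikube re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` on the node, and once the retry budget is spent it concludes the control plane is down and falls through to a reconfigure. A simplified local sketch of that kind of poll loop follows; the real code runs the command over SSH from api_server.go.

package main

// Bounded poll for a process, modelled on the repeated pgrep checks in the
// log. Local illustration only; minikube executes the same pgrep remotely.

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // process found, return its PID
		}
		time.Sleep(200 * time.Millisecond) // matches the ~200ms cadence in the log
	}
	return "", errors.New("timed out waiting for the condition")
}

func main() {
	if pid, err := waitForAPIServerPID(3 * time.Second); err != nil {
		fmt.Println("apiserver not running:", err)
	} else {
		fmt.Println("apiserver pid:", pid)
	}
}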
	I0602 11:15:44.999886   14877 kubeadm.go:1092] stopping kube-system containers ...
	I0602 11:15:44.999942   14877 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:15:45.033438   14877 docker.go:442] Stopping containers: [e1d64bed7589 8fa6bceffcfa 617be3b7501b bdf3626b54a9 8000fe1582b5 f293ab9d6a43 1d0857401880 fd78a96b5164 a65395e30f8e 03154e30d8a2 6c8b9d467621 ffbfaa032774 5653f77280da 47adf6bc9949 ff8ed0ab8632 36890e67d5c5]
	I0602 11:15:45.033509   14877 ssh_runner.go:195] Run: docker stop e1d64bed7589 8fa6bceffcfa 617be3b7501b bdf3626b54a9 8000fe1582b5 f293ab9d6a43 1d0857401880 fd78a96b5164 a65395e30f8e 03154e30d8a2 6c8b9d467621 ffbfaa032774 5653f77280da 47adf6bc9949 ff8ed0ab8632 36890e67d5c5
	I0602 11:15:45.062716   14877 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 11:15:45.072553   14877 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:15:45.079955   14877 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun  2 18:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  2 18:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Jun  2 18:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  2 18:15 /etc/kubernetes/scheduler.conf
	
	I0602 11:15:45.080002   14877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0602 11:15:45.087183   14877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0602 11:15:45.094366   14877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0602 11:15:45.101335   14877 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:45.101383   14877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0602 11:15:45.108132   14877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0602 11:15:45.115209   14877 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:45.115257   14877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0602 11:15:45.122012   14877 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:15:45.129423   14877 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 11:15:45.129435   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:15:45.173306   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:15:46.061629   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:15:46.181208   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:15:46.227664   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:15:46.274346   14877 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:15:46.274404   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:15:46.783969   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:15:47.284127   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:15:47.783755   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:15:47.793675   14877 api_server.go:71] duration metric: took 1.519303202s to wait for apiserver process to appear ...
	I0602 11:15:47.793698   14877 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:15:47.793711   14877 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53985/healthz ...
	I0602 11:15:50.014271   14877 api_server.go:266] https://127.0.0.1:53985/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0602 11:15:50.014296   14877 api_server.go:102] status: https://127.0.0.1:53985/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0602 11:15:50.516427   14877 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53985/healthz ...
	I0602 11:15:50.524747   14877 api_server.go:266] https://127.0.0.1:53985/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:15:50.524768   14877 api_server.go:102] status: https://127.0.0.1:53985/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:15:51.014458   14877 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53985/healthz ...
	I0602 11:15:51.020688   14877 api_server.go:266] https://127.0.0.1:53985/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:15:51.020708   14877 api_server.go:102] status: https://127.0.0.1:53985/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:15:51.514427   14877 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53985/healthz ...
	I0602 11:15:51.520036   14877 api_server.go:266] https://127.0.0.1:53985/healthz returned 200:
	ok
	I0602 11:15:51.527363   14877 api_server.go:140] control plane version: v1.23.6
	I0602 11:15:51.527374   14877 api_server.go:130] duration metric: took 3.73360754s to wait for apiserver health ...
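The healthz wait above polls https://127.0.0.1:53985/healthz, the host port Docker mapped to the API server's 8443, and treats anything other than HTTP 200 as not ready, which is why the anonymous 403 and the 500 with failing poststarthooks show up as warnings before the final "ok". A minimal sketch of such a probe follows; skipping TLS verification here is an assumption made only to keep the example short, not how minikube talks to the API server.

package main

// Minimal healthz probe illustrating the check logged above. InsecureSkipVerify
// is used purely to keep this sketch self-contained.

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		// 403 (anonymous user) and 500 (poststarthooks still failing) both land here.
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://127.0.0.1:53985/healthz"); err != nil {
		fmt.Println("not ready:", err)
	} else {
		fmt.Println("ok")
	}
}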
	I0602 11:15:51.527381   14877 cni.go:95] Creating CNI manager for ""
	I0602 11:15:51.527386   14877 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:15:51.527396   14877 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:15:51.534273   14877 system_pods.go:59] 9 kube-system pods found
	I0602 11:15:51.534290   14877 system_pods.go:61] "coredns-64897985d-ckpbd" [f940716f-dc7a-4f33-a9e3-f89b1bbf3a7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 11:15:51.534295   14877 system_pods.go:61] "coredns-64897985d-dk92c" [d9f7db33-9fc3-4885-8d25-6ab42e9f8b8f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 11:15:51.534299   14877 system_pods.go:61] "etcd-newest-cni-20220602111446-2113" [45343802-230f-4002-83ee-3028731601ed] Running
	I0602 11:15:51.534304   14877 system_pods.go:61] "kube-apiserver-newest-cni-20220602111446-2113" [e45d4f13-ffeb-448d-a62c-1535c7511193] Running
	I0602 11:15:51.534307   14877 system_pods.go:61] "kube-controller-manager-newest-cni-20220602111446-2113" [603546ac-2b33-4b29-a2d3-efcaff1925e6] Running
	I0602 11:15:51.534313   14877 system_pods.go:61] "kube-proxy-5sjvd" [91df91a9-1e57-4106-a94a-dc45614445f1] Running
	I0602 11:15:51.534318   14877 system_pods.go:61] "kube-scheduler-newest-cni-20220602111446-2113" [14624cc4-0799-4a60-a5b9-f158f628b2be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0602 11:15:51.534322   14877 system_pods.go:61] "metrics-server-b955d9d8-2jrzg" [91ec99de-3cb5-41b9-b2a1-954f97a3c052] Pending
	I0602 11:15:51.534327   14877 system_pods.go:61] "storage-provisioner" [5ca87ae3-29fb-44fb-aaf4-0a375381b9fd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 11:15:51.534331   14877 system_pods.go:74] duration metric: took 6.930904ms to wait for pod list to return data ...
	I0602 11:15:51.534336   14877 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:15:51.537064   14877 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:15:51.537078   14877 node_conditions.go:123] node cpu capacity is 6
	I0602 11:15:51.537090   14877 node_conditions.go:105] duration metric: took 2.749266ms to run NodePressure ...
	I0602 11:15:51.537102   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:15:51.759154   14877 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 11:15:51.767481   14877 ops.go:34] apiserver oom_adj: -16
	I0602 11:15:51.767493   14877 kubeadm.go:630] restartCluster took 9.883428446s
	I0602 11:15:51.767500   14877 kubeadm.go:397] StartCluster complete in 9.920581795s
	I0602 11:15:51.767516   14877 settings.go:142] acquiring lock: {Name:mka48fc2cc9e132f8df9370d54d7f09abdd5d2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:15:51.767607   14877 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:15:51.768233   14877 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:15:51.771735   14877 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220602111446-2113" rescaled to 1
	I0602 11:15:51.771794   14877 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 11:15:51.771823   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 11:15:51.771832   14877 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0602 11:15:51.771913   14877 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220602111446-2113"
	I0602 11:15:51.816358   14877 out.go:177] * Verifying Kubernetes components...
	I0602 11:15:51.771929   14877 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220602111446-2113"
	I0602 11:15:51.771941   14877 addons.go:65] Setting dashboard=true in profile "newest-cni-20220602111446-2113"
	I0602 11:15:51.837567   14877 addons.go:153] Setting addon dashboard=true in "newest-cni-20220602111446-2113"
	W0602 11:15:51.837582   14877 addons.go:165] addon dashboard should already be in state true
	I0602 11:15:51.771966   14877 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220602111446-2113"
	I0602 11:15:51.772079   14877 config.go:178] Loaded profile config "newest-cni-20220602111446-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:15:51.837631   14877 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220602111446-2113"
	I0602 11:15:51.837628   14877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:15:51.837654   14877 host.go:66] Checking if "newest-cni-20220602111446-2113" exists ...
	I0602 11:15:51.816386   14877 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220602111446-2113"
	W0602 11:15:51.837707   14877 addons.go:165] addon storage-provisioner should already be in state true
	I0602 11:15:51.816393   14877 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220602111446-2113"
	I0602 11:15:51.837774   14877 host.go:66] Checking if "newest-cni-20220602111446-2113" exists ...
	W0602 11:15:51.837780   14877 addons.go:165] addon metrics-server should already be in state true
	I0602 11:15:51.837889   14877 host.go:66] Checking if "newest-cni-20220602111446-2113" exists ...
	I0602 11:15:51.838181   14877 cli_runner.go:164] Run: docker container inspect newest-cni-20220602111446-2113 --format={{.State.Status}}
	I0602 11:15:51.840623   14877 cli_runner.go:164] Run: docker container inspect newest-cni-20220602111446-2113 --format={{.State.Status}}
	I0602 11:15:51.840900   14877 cli_runner.go:164] Run: docker container inspect newest-cni-20220602111446-2113 --format={{.State.Status}}
	I0602 11:15:51.842643   14877 cli_runner.go:164] Run: docker container inspect newest-cni-20220602111446-2113 --format={{.State.Status}}
	I0602 11:15:51.955694   14877 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0602 11:15:51.955752   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:51.969552   14877 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 11:15:52.005992   14877 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0602 11:15:52.043252   14877 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0602 11:15:52.080422   14877 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:15:52.154281   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 11:15:52.212028   14877 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0602 11:15:52.154328   14877 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0602 11:15:52.154398   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:52.156569   14877 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220602111446-2113"
	W0602 11:15:52.212060   14877 addons.go:165] addon default-storageclass should already be in state true
	I0602 11:15:52.212093   14877 host.go:66] Checking if "newest-cni-20220602111446-2113" exists ...
	I0602 11:15:52.212089   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0602 11:15:52.249422   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0602 11:15:52.249441   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0602 11:15:52.249519   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:52.249535   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:52.252007   14877 cli_runner.go:164] Run: docker container inspect newest-cni-20220602111446-2113 --format={{.State.Status}}
	I0602 11:15:52.257329   14877 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:15:52.257427   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:15:52.283298   14877 api_server.go:71] duration metric: took 511.464425ms to wait for apiserver process to appear ...
	I0602 11:15:52.283323   14877 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:15:52.283343   14877 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53985/healthz ...
	I0602 11:15:52.295031   14877 api_server.go:266] https://127.0.0.1:53985/healthz returned 200:
	ok
	I0602 11:15:52.297658   14877 api_server.go:140] control plane version: v1.23.6
	I0602 11:15:52.297677   14877 api_server.go:130] duration metric: took 14.346421ms to wait for apiserver health ...
	I0602 11:15:52.297686   14877 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:15:52.303691   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:52.306439   14877 system_pods.go:59] 9 kube-system pods found
	I0602 11:15:52.306469   14877 system_pods.go:61] "coredns-64897985d-ckpbd" [f940716f-dc7a-4f33-a9e3-f89b1bbf3a7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 11:15:52.306490   14877 system_pods.go:61] "coredns-64897985d-dk92c" [d9f7db33-9fc3-4885-8d25-6ab42e9f8b8f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 11:15:52.306512   14877 system_pods.go:61] "etcd-newest-cni-20220602111446-2113" [45343802-230f-4002-83ee-3028731601ed] Running
	I0602 11:15:52.306525   14877 system_pods.go:61] "kube-apiserver-newest-cni-20220602111446-2113" [e45d4f13-ffeb-448d-a62c-1535c7511193] Running
	I0602 11:15:52.306532   14877 system_pods.go:61] "kube-controller-manager-newest-cni-20220602111446-2113" [603546ac-2b33-4b29-a2d3-efcaff1925e6] Running
	I0602 11:15:52.306541   14877 system_pods.go:61] "kube-proxy-5sjvd" [91df91a9-1e57-4106-a94a-dc45614445f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0602 11:15:52.306550   14877 system_pods.go:61] "kube-scheduler-newest-cni-20220602111446-2113" [14624cc4-0799-4a60-a5b9-f158f628b2be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0602 11:15:52.306559   14877 system_pods.go:61] "metrics-server-b955d9d8-2jrzg" [91ec99de-3cb5-41b9-b2a1-954f97a3c052] Pending
	I0602 11:15:52.306568   14877 system_pods.go:61] "storage-provisioner" [5ca87ae3-29fb-44fb-aaf4-0a375381b9fd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 11:15:52.306577   14877 system_pods.go:74] duration metric: took 8.88473ms to wait for pod list to return data ...
	I0602 11:15:52.306586   14877 default_sa.go:34] waiting for default service account to be created ...
	I0602 11:15:52.310095   14877 default_sa.go:45] found service account: "default"
	I0602 11:15:52.310110   14877 default_sa.go:55] duration metric: took 3.519074ms for default service account to be created ...
	I0602 11:15:52.310121   14877 kubeadm.go:572] duration metric: took 538.29283ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0602 11:15:52.310147   14877 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:15:52.356332   14877 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:15:52.356351   14877 node_conditions.go:123] node cpu capacity is 6
	I0602 11:15:52.356386   14877 node_conditions.go:105] duration metric: took 46.217393ms to run NodePressure ...
	I0602 11:15:52.356401   14877 start.go:213] waiting for startup goroutines ...
	I0602 11:15:52.358408   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:52.359033   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:52.360925   14877 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 11:15:52.360941   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 11:15:52.361030   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:52.441993   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:52.474316   14877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:15:52.479895   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0602 11:15:52.479910   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0602 11:15:52.480769   14877 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0602 11:15:52.480785   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0602 11:15:52.563865   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0602 11:15:52.563879   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0602 11:15:52.568118   14877 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0602 11:15:52.568135   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0602 11:15:52.580618   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0602 11:15:52.580636   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0602 11:15:52.593643   14877 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:15:52.593657   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0602 11:15:52.602278   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0602 11:15:52.602293   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0602 11:15:52.663623   14877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:15:52.666950   14877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 11:15:52.675063   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0602 11:15:52.675079   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0602 11:15:52.756663   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0602 11:15:52.756681   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0602 11:15:52.775832   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0602 11:15:52.775845   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0602 11:15:52.861949   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0602 11:15:52.861964   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0602 11:15:52.880054   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:15:52.880069   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0602 11:15:52.899051   14877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:15:53.757458   14877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.283090034s)
	I0602 11:15:53.758436   14877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.09477051s)
	I0602 11:15:53.758457   14877 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220602111446-2113"
	I0602 11:15:53.758478   14877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.091493481s)
	I0602 11:15:53.900139   14877 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0602 11:15:53.974108   14877 addons.go:417] enableAddons completed in 2.202246908s
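Enabling the addons, as logged above, reduces to copying each manifest into /etc/kubernetes/addons and running the node's own kubectl with KUBECONFIG=/var/lib/minikube/kubeconfig against them. A compact sketch of that apply step is below; the binary and manifest paths are copied from the log, the SSH plumbing is omitted, and the helper names are invented.

package main

// Sketch of the addon apply step: run the in-node kubectl against the in-node
// kubeconfig for each scp'd manifest. Only meaningful on the minikube node.

import (
	"fmt"
	"os"
	"os/exec"
)

// manifestArgs turns each manifest path into a "-f <path>" pair.
func manifestArgs(paths []string) []string {
	var args []string
	for _, p := range paths {
		args = append(args, "-f", p)
	}
	return args
}

func applyAddon(manifests ...string) error {
	args := append([]string{"apply"}, manifestArgs(manifests)...)
	cmd := exec.Command("/var/lib/minikube/binaries/v1.23.6/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyAddon(
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}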
	I0602 11:15:54.006631   14877 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0602 11:15:54.028219   14877 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220602111446-2113" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 18:15:38 UTC, end at Thu 2022-06-02 18:16:31 UTC. --
	Jun 02 18:15:38 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:38.696401300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 18:15:38 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:38.704947040Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 02 18:15:38 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:38.709949562Z" level=info msg="Loading containers: start."
	Jun 02 18:15:38 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:38.912880972Z" level=info msg="Removing stale sandbox 66dfe53b6c34b9506d8623780a11dd9a311f59d953fefe7a81d6414df01d910a (ff8ed0ab86325d80a79fb4fc4908c68c2f4ce0f34602a39d925950a1a29887a8)"
	Jun 02 18:15:38 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:38.914334752Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 441dec57b17cc0259c5d1059688a6f1c2c67390d142b5464ea107f2608e80bb3 2fab27787849e90141062f1f48a8254b0a6837762a93717dd9457a6204c508bc], retrying...."
	Jun 02 18:15:39 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:39.000769842Z" level=info msg="Removing stale sandbox da8cf702edd84cb54c7b6b79e10a388e7facbd33ec05cf6bbb7e1cb6b11d5b56 (5653f77280da16a657265f8bac7ea06e73f7c42c5f70835e56bb71a69ed8b3da)"
	Jun 02 18:15:39 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:39.002330637Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 441dec57b17cc0259c5d1059688a6f1c2c67390d142b5464ea107f2608e80bb3 5db233f9961d3b8196e55acb7847badf96d10bcb6ee3c72bdc5ae01eacd119c9], retrying...."
	Jun 02 18:15:39 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:39.093539549Z" level=info msg="Removing stale sandbox ec7a31e46f2e41f759c002c0b6599dd84b32e0dfbdb9520cc726ef9b2bea9b4f (617be3b7501b0bbf4d583a9d14f1e52713d557f6f1028f996f3c13e91840aa87)"
	Jun 02 18:15:39 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:39.094683044Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 441dec57b17cc0259c5d1059688a6f1c2c67390d142b5464ea107f2608e80bb3 13a7085309cd58bb3ff355790b5ce32f206af63d7c4606ca04702844d900bed4], retrying...."
	Jun 02 18:15:39 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:39.116917756Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 02 18:15:39 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:39.152494087Z" level=info msg="Loading containers: done."
	Jun 02 18:15:39 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:39.160960289Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 02 18:15:39 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:39.161029530Z" level=info msg="Daemon has completed initialization"
	Jun 02 18:15:39 newest-cni-20220602111446-2113 systemd[1]: Started Docker Application Container Engine.
	Jun 02 18:15:39 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:39.181252144Z" level=info msg="API listen on [::]:2376"
	Jun 02 18:15:39 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:39.183661464Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 02 18:15:51 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:51.588674615Z" level=info msg="ignoring event" container=639e8fe7ce478f4b16d502f27cf4528cb3b3d40c593ffce44abcad43e935bed6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:15:52 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:52.505559659Z" level=info msg="ignoring event" container=074836b3fefc3db47b243278a8384f52260d287d18c363cc0f7d8063bb7905fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:15:52 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:52.512909313Z" level=info msg="ignoring event" container=0b61efa28740f8b9e65345ce858f8fecdcaf8890aab420aa883e480511acf921 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:15:53 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:53.417778633Z" level=info msg="ignoring event" container=30beb54646e1553255329881b57dd73179a6e4bf7238a10b5e7d2d32bb2beb80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:15:53 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:53.505204677Z" level=info msg="ignoring event" container=20a24f71f42a8ba5ebace85b2208d3abafcd18e16cfca2185d366e1e631c6216 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:15:54 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:54.296176630Z" level=info msg="ignoring event" container=186057c849deea2576ab6a8eeb6a66e0efc4a5309d5d167dc397552ea5c63840 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:15:54 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:54.311054215Z" level=info msg="ignoring event" container=0fa44147f5efe241c0cb4a601c290677ab6e6cd3f7543856340b206e16904bea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:15:55 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:55.146073311Z" level=info msg="ignoring event" container=0187171882e1ded642a15b4144559bc4cd4674d1dd100bae0c92d848090c98e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:15:55 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:55.164441055Z" level=info msg="ignoring event" container=4817f21a87a85365e3237ef698b41f9dacd1d3d691a4fb4d4dcb7ed97fbec9a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	6c76a8787c875       6e38f40d628db       40 seconds ago       Running             storage-provisioner       1                   d865c91d9d622
	42ba4c375d74a       4c03754524064       40 seconds ago       Running             kube-proxy                1                   d87e70b57510e
	c07f121101bc9       df7b72818ad2e       44 seconds ago       Running             kube-controller-manager   1                   8046b20b0d10a
	d2d625f83230a       595f327f224a4       44 seconds ago       Running             kube-scheduler            1                   f4bf16b662e3f
	ccb03438c98a2       25f8c7f3da61c       44 seconds ago       Running             etcd                      1                   942942998bd72
	4242a8ad99d05       8fa62c12256df       44 seconds ago       Running             kube-apiserver            1                   3d94a3f3b5cc2
	8fa6bceffcfa2       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   617be3b7501b0
	f293ab9d6a43f       4c03754524064       About a minute ago   Exited              kube-proxy                0                   1d08574018804
	a65395e30f8ed       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   36890e67d5c54
	03154e30d8a22       595f327f224a4       About a minute ago   Exited              kube-scheduler            0                   5653f77280da1
	6c8b9d467621a       8fa62c12256df       About a minute ago   Exited              kube-apiserver            0                   ff8ed0ab86325
	ffbfaa032774b       df7b72818ad2e       About a minute ago   Exited              kube-controller-manager   0                   47adf6bc99493
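
The ATTEMPT column above increments each time kubelet recreates a container, which lines up with the per-container restart counts Kubernetes tracks (the attempt-1 entries are the restarted copies of the attempt-0 exited ones). As a cross-check, a minimal client-go sketch that prints those restart counts for kube-system pods; the kubeconfig path and the way the clientset is built are assumptions for illustration, not part of this run:

    // pod_restarts_sketch.go — illustrative only; the kubeconfig path is an assumption.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            for _, st := range p.Status.ContainerStatuses {
                // RestartCount is what shows up as the additional attempts in the table above.
                fmt.Printf("%s/%s restarts=%d ready=%v\n", p.Name, st.Name, st.RestartCount, st.Ready)
            }
        }
    }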
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220602111446-2113
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220602111446-2113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=newest-cni-20220602111446-2113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T11_15_08_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 18:15:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220602111446-2113
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 18:16:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 18:16:29 +0000   Thu, 02 Jun 2022 18:15:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 18:16:29 +0000   Thu, 02 Jun 2022 18:15:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 18:16:29 +0000   Thu, 02 Jun 2022 18:15:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 18:16:29 +0000   Thu, 02 Jun 2022 18:16:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    newest-cni-20220602111446-2113
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                b8a9c196-0f67-4278-a87a-69d0d4fb8109
	  Boot ID:                    a475dd08-72ba-4c6d-89c1-75a58adc3783
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-dk92c                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     70s
	  kube-system                 etcd-newest-cni-20220602111446-2113                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kube-apiserver-newest-cni-20220602111446-2113             250m (4%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-newest-cni-20220602111446-2113    200m (3%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-5sjvd                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-newest-cni-20220602111446-2113             100m (1%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 metrics-server-b955d9d8-2jrzg                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         67s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-xvrdt                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-cd7c84bfc-c78zj                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 69s                kube-proxy  
	  Normal  Starting                 40s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    89s (x5 over 89s)  kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  89s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     89s (x4 over 89s)  kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  89s (x5 over 89s)  kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientMemory
	  Normal  Starting                 83s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  83s                kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s                kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s                kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                72s                kubelet     Node newest-cni-20220602111446-2113 status is now: NodeReady
	  Normal  Starting                 45s                kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    45s (x7 over 45s)  kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x7 over 45s)  kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  45s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  45s (x7 over 45s)  kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2s                 kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s                 kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s                 kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2s                 kubelet     Node newest-cni-20220602111446-2113 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2s                 kubelet     Node newest-cni-20220602111446-2113 status is now: NodeReady
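
The conditions, allocatable resources, and events above are the `kubectl describe node` view of this profile; the same data can be read programmatically. A minimal client-go sketch, under the assumption that a kubeconfig pointing at this cluster is available locally (the path below is a placeholder):

    // node_conditions_sketch.go — illustrative only; kubeconfig path is an assumption.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "newest-cni-20220602111446-2113", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // Prints the same Ready / MemoryPressure / DiskPressure / PIDPressure rows shown above.
        for _, c := range node.Status.Conditions {
            fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
        }
        fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String())
    }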
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [a65395e30f8e] <==
	* {"level":"info","ts":"2022-06-02T18:15:03.525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220602111446-2113 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:15:03.526Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T18:15:03.526Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T18:15:03.526Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-02T18:15:03.529Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:15:03.529Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:15:03.529Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:15:03.530Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T18:15:24.648Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-02T18:15:24.648Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"newest-cni-20220602111446-2113","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/06/02 18:15:24 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/02 18:15:24 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-02T18:15:24.694Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-06-02T18:15:24.696Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:15:24.698Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:15:24.698Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"newest-cni-20220602111446-2113","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> etcd [ccb03438c98a] <==
	* {"level":"info","ts":"2022-06-02T18:15:47.429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-06-02T18:15:47.429Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-06-02T18:15:47.429Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:15:47.429Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:15:48.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-02T18:15:48.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-02T18:15:48.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:15:48.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-06-02T18:15:48.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-06-02T18:15:48.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-06-02T18:15:48.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-06-02T18:15:48.519Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220602111446-2113 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T18:15:48.519Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:15:48.519Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:15:48.520Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T18:15:48.520Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T18:15:48.520Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-02T18:15:48.520Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T18:15:52.172Z","caller":"traceutil/trace.go:171","msg":"trace[1883045834] linearizableReadLoop","detail":"{readStateIndex:553; appliedIndex:553; }","duration":"194.706539ms","start":"2022-06-02T18:15:51.978Z","end":"2022-06-02T18:15:52.172Z","steps":["trace[1883045834] 'read index received'  (duration: 194.701511ms)","trace[1883045834] 'applied index is now lower than readState.Index'  (duration: 4.486µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T18:15:52.173Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"194.950974ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:1 size:994"}
	{"level":"info","ts":"2022-06-02T18:15:52.173Z","caller":"traceutil/trace.go:171","msg":"trace[1044261052] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:522; }","duration":"195.085941ms","start":"2022-06-02T18:15:51.978Z","end":"2022-06-02T18:15:52.173Z","steps":["trace[1044261052] 'agreement among raft nodes before linearized reading'  (duration: 194.916033ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T18:15:52.270Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"223.357707ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-06-02T18:15:52.270Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"291.370392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-newest-cni-20220602111446-2113\" ","response":"range_response_count:1 size:7385"}
	{"level":"info","ts":"2022-06-02T18:15:52.270Z","caller":"traceutil/trace.go:171","msg":"trace[439630751] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-newest-cni-20220602111446-2113; range_end:; response_count:1; response_revision:523; }","duration":"291.436358ms","start":"2022-06-02T18:15:51.978Z","end":"2022-06-02T18:15:52.270Z","steps":["trace[439630751] 'agreement among raft nodes before linearized reading'  (duration: 291.315533ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T18:15:52.270Z","caller":"traceutil/trace.go:171","msg":"trace[418227208] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:523; }","duration":"223.425321ms","start":"2022-06-02T18:15:52.046Z","end":"2022-06-02T18:15:52.270Z","steps":["trace[418227208] 'agreement among raft nodes before linearized reading'  (duration: 223.31959ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  18:16:32 up  1:04,  0 users,  load average: 1.64, 1.22, 1.12
	Linux newest-cni-20220602111446-2113 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [4242a8ad99d0] <==
	* I0602 18:15:50.115261       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0602 18:15:50.115437       1 cache.go:39] Caches are synced for autoregister controller
	I0602 18:15:50.117613       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0602 18:15:50.128575       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0602 18:15:50.128634       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0602 18:15:50.138633       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0602 18:15:51.014538       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0602 18:15:51.014554       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0602 18:15:51.020507       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	W0602 18:15:51.187003       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 18:15:51.187105       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 18:15:51.187113       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0602 18:15:51.273981       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0602 18:15:51.690701       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 18:15:51.718939       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 18:15:51.744154       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0602 18:15:51.756375       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 18:15:51.761139       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 18:15:53.695482       1 controller.go:611] quota admission added evaluator for: namespaces
	I0602 18:15:53.876714       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.96.171.28]
	I0602 18:15:53.886162       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.98.98.177]
	I0602 18:16:28.662601       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 18:16:29.436054       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0602 18:16:29.751850       1 controller.go:611] quota admission added evaluator for: replicasets.apps
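
The 503 logged above for v1beta1.metrics.k8s.io is the aggregation layer reporting that metrics-server is not serving yet. A discovery-based sketch (kubeconfig path assumed) that checks whether the aggregated group has appeared:

    // metrics_group_sketch.go — illustrative only; kubeconfig path is an assumption.
    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        groups, err := cs.Discovery().ServerGroups()
        if err != nil {
            log.Fatal(err)
        }
        found := false
        for _, g := range groups.Groups {
            if g.Name == "metrics.k8s.io" {
                found = true
                for _, v := range g.Versions {
                    fmt.Println("serving", v.GroupVersion)
                }
            }
        }
        if !found {
            fmt.Println("metrics.k8s.io not aggregated yet")
        }
    }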
	
	* 
	* ==> kube-apiserver [6c8b9d467621] <==
	* W0602 18:15:34.084965       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.102387       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.119137       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.122532       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.163605       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.175540       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.201099       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.207641       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.207996       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.216898       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.284171       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.364660       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.367232       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.387876       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.520083       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.531265       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.533079       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.533112       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.539091       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.541722       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.567920       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.592062       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.615766       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.640781       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.643915       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [c07f121101bc] <==
	* I0602 18:16:29.447981       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0602 18:16:29.448059       1 shared_informer.go:247] Caches are synced for job 
	I0602 18:16:29.448290       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0602 18:16:29.448364       1 shared_informer.go:247] Caches are synced for disruption 
	I0602 18:16:29.448373       1 disruption.go:371] Sending events to api server.
	I0602 18:16:29.456460       1 shared_informer.go:247] Caches are synced for namespace 
	I0602 18:16:29.535066       1 shared_informer.go:247] Caches are synced for service account 
	I0602 18:16:29.535083       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0602 18:16:29.539615       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 18:16:29.561468       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 18:16:29.571101       1 shared_informer.go:247] Caches are synced for taint 
	I0602 18:16:29.571162       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	I0602 18:16:29.571182       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0602 18:16:29.571204       1 node_lifecycle_controller.go:1012] Missing timestamp for Node newest-cni-20220602111446-2113. Assuming now as a timestamp.
	I0602 18:16:29.571222       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0602 18:16:29.571505       1 event.go:294] "Event occurred" object="newest-cni-20220602111446-2113" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220602111446-2113 event: Registered Node newest-cni-20220602111446-2113 in Controller"
	I0602 18:16:29.573957       1 shared_informer.go:247] Caches are synced for attach detach 
	I0602 18:16:29.659755       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0602 18:16:29.755875       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-cd7c84bfc to 1"
	I0602 18:16:29.755964       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0602 18:16:29.903736       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-cd7c84bfc-c78zj"
	I0602 18:16:29.904921       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-xvrdt"
	I0602 18:16:30.059480       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 18:16:30.063806       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 18:16:30.063837       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [ffbfaa032774] <==
	* I0602 18:15:20.885949       1 shared_informer.go:247] Caches are synced for job 
	I0602 18:15:20.889811       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0602 18:15:20.891118       1 shared_informer.go:247] Caches are synced for namespace 
	I0602 18:15:20.894446       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0602 18:15:20.900107       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5sjvd"
	I0602 18:15:20.941193       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0602 18:15:20.949550       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0602 18:15:20.954060       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0602 18:15:21.082701       1 shared_informer.go:247] Caches are synced for disruption 
	I0602 18:15:21.082758       1 disruption.go:371] Sending events to api server.
	I0602 18:15:21.095312       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 18:15:21.096838       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 18:15:21.135721       1 shared_informer.go:247] Caches are synced for stateful set 
	I0602 18:15:21.244824       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0602 18:15:21.515067       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 18:15:21.533842       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 18:15:21.534040       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0602 18:15:21.697906       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-ckpbd"
	I0602 18:15:21.702597       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-dk92c"
	I0602 18:15:21.702652       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0602 18:15:21.717257       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-ckpbd"
	I0602 18:15:23.993729       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0602 18:15:23.995505       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0602 18:15:24.002244       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0602 18:15:24.009766       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-2jrzg"
	
	* 
	* ==> kube-proxy [42ba4c375d74] <==
	* I0602 18:15:51.252640       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0602 18:15:51.252695       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0602 18:15:51.252721       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 18:15:51.268752       1 server_others.go:206] "Using iptables Proxier"
	I0602 18:15:51.268795       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 18:15:51.268803       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 18:15:51.268818       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 18:15:51.269424       1 server.go:656] "Version info" version="v1.23.6"
	I0602 18:15:51.270014       1 config.go:317] "Starting service config controller"
	I0602 18:15:51.270054       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 18:15:51.270067       1 config.go:226] "Starting endpoint slice config controller"
	I0602 18:15:51.270070       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 18:15:51.370278       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 18:15:51.370305       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [f293ab9d6a43] <==
	* I0602 18:15:22.069746       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0602 18:15:22.069835       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0602 18:15:22.069880       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 18:15:22.107667       1 server_others.go:206] "Using iptables Proxier"
	I0602 18:15:22.107702       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 18:15:22.107708       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 18:15:22.107717       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 18:15:22.108076       1 server.go:656] "Version info" version="v1.23.6"
	I0602 18:15:22.108947       1 config.go:317] "Starting service config controller"
	I0602 18:15:22.109009       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 18:15:22.109076       1 config.go:226] "Starting endpoint slice config controller"
	I0602 18:15:22.109082       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 18:15:22.209084       1 shared_informer.go:247] Caches are synced for service config 
	I0602 18:15:22.209178       1 shared_informer.go:247] Caches are synced for endpoint slice config 
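
Both kube-proxy instances log "Unknown proxy mode, assuming iptables proxy", i.e. the mode field in their configuration is empty and the iptables proxier is chosen by default. On kubeadm-provisioned clusters such as this one that configuration lives in the kube-proxy ConfigMap; a sketch that dumps it (kubeconfig path assumed):

    // kube_proxy_config_sketch.go — illustrative only; kubeconfig path is an assumption.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // An empty "mode:" in config.conf is what produces the "assuming iptables proxy" message above.
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(cm.Data["config.conf"])
    }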
	
	* 
	* ==> kube-scheduler [03154e30d8a2] <==
	* E0602 18:15:05.802474       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0602 18:15:05.802442       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 18:15:05.802525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 18:15:05.802607       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0602 18:15:05.802565       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0602 18:15:05.802636       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 18:15:05.802646       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 18:15:05.802687       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0602 18:15:05.802715       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0602 18:15:05.802877       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 18:15:05.802917       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 18:15:05.804277       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0602 18:15:05.804293       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 18:15:06.648664       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0602 18:15:06.648717       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0602 18:15:06.650846       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 18:15:06.650893       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 18:15:06.719512       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 18:15:06.719548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0602 18:15:06.827391       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 18:15:06.827435       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0602 18:15:07.334530       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0602 18:15:24.711628       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 18:15:24.711797       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0602 18:15:24.715962       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
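
The "forbidden" errors above come from the scheduler starting before its RBAC bindings exist; they stop once the bindings are reconciled, as the later synced-informer line shows. A hedged sketch that asks the API server the equivalent access question for whatever identity the local kubeconfig carries (not the scheduler's own service identity), using a SelfSubjectAccessReview:

    // access_review_sketch.go — illustrative only; kubeconfig path is an assumption.
    package main

    import (
        "context"
        "fmt"
        "log"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Mirrors one of the denied requests above: list csistoragecapacities in storage.k8s.io.
        review := &authv1.SelfSubjectAccessReview{
            Spec: authv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authv1.ResourceAttributes{
                    Verb:     "list",
                    Group:    "storage.k8s.io",
                    Resource: "csistoragecapacities",
                },
            },
        }
        resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), review, metav1.CreateOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("allowed:", resp.Status.Allowed, resp.Status.Reason)
    }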
	
	* 
	* ==> kube-scheduler [d2d625f83230] <==
	* W0602 18:15:47.505147       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0602 18:15:48.027837       1 serving.go:348] Generated self-signed cert in-memory
	W0602 18:15:50.045173       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0602 18:15:50.045555       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 18:15:50.045814       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0602 18:15:50.046899       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0602 18:15:50.054889       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0602 18:15:50.077456       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0602 18:15:50.077467       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 18:15:50.077535       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0602 18:15:50.078815       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0602 18:15:50.178818       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 18:15:38 UTC, end at Thu 2022-06-02 18:16:34 UTC. --
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.133395    3766 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"72721ca1e7a3a70b36a8777563e67443ef776ff7e1502640f4a99b82f5fdf6cf\" network for pod \"dashboard-metrics-scraper-56974995fc-xvrdt\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-xvrdt_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"72721ca1e7a3a70b36a8777563e67443ef776ff7e1502640f4a99b82f5fdf6cf\" network for pod \"dashboard-metrics-scraper-56974995fc-xvrdt\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-xvrdt_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-6819864bba102705def8bed6 -m comment --comment name: \"crio\" id: \"72721ca1e7a3a70b36a8777
563e67443ef776ff7e1502640f4a99b82f5fdf6cf\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-6819864bba102705def8bed6':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.133425    3766 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"72721ca1e7a3a70b36a8777563e67443ef776ff7e1502640f4a99b82f5fdf6cf\" network for pod \"dashboard-metrics-scraper-56974995fc-xvrdt\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-xvrdt_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"72721ca1e7a3a70b36a8777563e67443ef776ff7e1502640f4a99b82f5fdf6cf\" network for pod \"dashboard-metrics-scraper-56974995fc-xvrdt\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-xvrdt_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-6819864bba102705def8bed6 -m comment --comment name: \"crio\" id: \"72721ca1e7a3a70b36a8777563e6
7443ef776ff7e1502640f4a99b82f5fdf6cf\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-6819864bba102705def8bed6':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-xvrdt"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.133448    3766 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"72721ca1e7a3a70b36a8777563e67443ef776ff7e1502640f4a99b82f5fdf6cf\" network for pod \"dashboard-metrics-scraper-56974995fc-xvrdt\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-xvrdt_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"72721ca1e7a3a70b36a8777563e67443ef776ff7e1502640f4a99b82f5fdf6cf\" network for pod \"dashboard-metrics-scraper-56974995fc-xvrdt\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-xvrdt_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-6819864bba102705def8bed6 -m comment --comment name: \"crio\" id: \"72721ca1e7a3a70b36a8777563e6
7443ef776ff7e1502640f4a99b82f5fdf6cf\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-6819864bba102705def8bed6':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-xvrdt"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.133499    3766 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-56974995fc-xvrdt_kubernetes-dashboard(8075c8ab-6ca7-4ff5-91f0-27be759fd491)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-56974995fc-xvrdt_kubernetes-dashboard(8075c8ab-6ca7-4ff5-91f0-27be759fd491)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"72721ca1e7a3a70b36a8777563e67443ef776ff7e1502640f4a99b82f5fdf6cf\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-xvrdt\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-56974995fc-xvrdt_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"72721ca1e7a3a70b36a8777563e67443ef776ff7e1502640f4a99b82f5fdf6cf\\\" network for pod \\\"dashb
oard-metrics-scraper-56974995fc-xvrdt\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-56974995fc-xvrdt_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-6819864bba102705def8bed6 -m comment --comment name: \\\"crio\\\" id: \\\"72721ca1e7a3a70b36a8777563e67443ef776ff7e1502640f4a99b82f5fdf6cf\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-6819864bba102705def8bed6':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-xvrdt" podUID=8075c8ab-6ca7-4ff5-91f0-27be759fd491
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.133397    3766 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"4d136898217f07820ca2031a05c662d3a1585dd2538cef7720277fd83e08e313\" network for pod \"coredns-64897985d-dk92c\": networkPlugin cni failed to set up pod \"coredns-64897985d-dk92c_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"4d136898217f07820ca2031a05c662d3a1585dd2538cef7720277fd83e08e313\" network for pod \"coredns-64897985d-dk92c\": networkPlugin cni failed to teardown pod \"coredns-64897985d-dk92c_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.18 -j CNI-f44ebedaebf7699844beee1b -m comment --comment name: \"crio\" id: \"4d136898217f07820ca2031a05c662d3a1585dd2538cef7720277fd83e08e313\" --wait]: exit status 2: iptables v1.8.4 (legacy): Could
n't load target `CNI-f44ebedaebf7699844beee1b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-dk92c"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.133536    3766 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"4d136898217f07820ca2031a05c662d3a1585dd2538cef7720277fd83e08e313\" network for pod \"coredns-64897985d-dk92c\": networkPlugin cni failed to set up pod \"coredns-64897985d-dk92c_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"4d136898217f07820ca2031a05c662d3a1585dd2538cef7720277fd83e08e313\" network for pod \"coredns-64897985d-dk92c\": networkPlugin cni failed to teardown pod \"coredns-64897985d-dk92c_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.18 -j CNI-f44ebedaebf7699844beee1b -m comment --comment name: \"crio\" id: \"4d136898217f07820ca2031a05c662d3a1585dd2538cef7720277fd83e08e313\" --wait]: exit status 2: iptables v1.8.4 (legacy): Could
n't load target `CNI-f44ebedaebf7699844beee1b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-dk92c"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.133573    3766 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-dk92c_kube-system(d9f7db33-9fc3-4885-8d25-6ab42e9f8b8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-dk92c_kube-system(d9f7db33-9fc3-4885-8d25-6ab42e9f8b8f)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"4d136898217f07820ca2031a05c662d3a1585dd2538cef7720277fd83e08e313\\\" network for pod \\\"coredns-64897985d-dk92c\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-dk92c_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"4d136898217f07820ca2031a05c662d3a1585dd2538cef7720277fd83e08e313\\\" network for pod \\\"coredns-64897985d-dk92c\\\": networkPlugin cni failed to teardown pod \\\"coredns-64897985d-dk92c_kube-syste
m\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.18 -j CNI-f44ebedaebf7699844beee1b -m comment --comment name: \\\"crio\\\" id: \\\"4d136898217f07820ca2031a05c662d3a1585dd2538cef7720277fd83e08e313\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f44ebedaebf7699844beee1b':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-64897985d-dk92c" podUID=d9f7db33-9fc3-4885-8d25-6ab42e9f8b8f
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.133357    3766 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"61c41e8f41aeabadbe1b71f2d93d6df618153fe85690f2419e19e3c3df49860a\" network for pod \"metrics-server-b955d9d8-2jrzg\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-2jrzg_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"61c41e8f41aeabadbe1b71f2d93d6df618153fe85690f2419e19e3c3df49860a\" network for pod \"metrics-server-b955d9d8-2jrzg\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-2jrzg_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.16 -j CNI-884d90612ab45be8dcb1473b -m comment --comment name: \"crio\" id: \"61c41e8f41aeabadbe1b71f2d93d6df618153fe85690f2419e19e3c3df49860a\" --wait]: exit status 2: ip
tables v1.8.4 (legacy): Couldn't load target `CNI-884d90612ab45be8dcb1473b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.133607    3766 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"61c41e8f41aeabadbe1b71f2d93d6df618153fe85690f2419e19e3c3df49860a\" network for pod \"metrics-server-b955d9d8-2jrzg\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-2jrzg_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"61c41e8f41aeabadbe1b71f2d93d6df618153fe85690f2419e19e3c3df49860a\" network for pod \"metrics-server-b955d9d8-2jrzg\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-2jrzg_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.16 -j CNI-884d90612ab45be8dcb1473b -m comment --comment name: \"crio\" id: \"61c41e8f41aeabadbe1b71f2d93d6df618153fe85690f2419e19e3c3df49860a\" --wait]: exit status 2: iptable
s v1.8.4 (legacy): Couldn't load target `CNI-884d90612ab45be8dcb1473b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-2jrzg"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.133625    3766 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"61c41e8f41aeabadbe1b71f2d93d6df618153fe85690f2419e19e3c3df49860a\" network for pod \"metrics-server-b955d9d8-2jrzg\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-2jrzg_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"61c41e8f41aeabadbe1b71f2d93d6df618153fe85690f2419e19e3c3df49860a\" network for pod \"metrics-server-b955d9d8-2jrzg\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-2jrzg_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.16 -j CNI-884d90612ab45be8dcb1473b -m comment --comment name: \"crio\" id: \"61c41e8f41aeabadbe1b71f2d93d6df618153fe85690f2419e19e3c3df49860a\" --wait]: exit status 2: iptable
s v1.8.4 (legacy): Couldn't load target `CNI-884d90612ab45be8dcb1473b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-2jrzg"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.133656    3766 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-b955d9d8-2jrzg_kube-system(91ec99de-3cb5-41b9-b2a1-954f97a3c052)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-b955d9d8-2jrzg_kube-system(91ec99de-3cb5-41b9-b2a1-954f97a3c052)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"61c41e8f41aeabadbe1b71f2d93d6df618153fe85690f2419e19e3c3df49860a\\\" network for pod \\\"metrics-server-b955d9d8-2jrzg\\\": networkPlugin cni failed to set up pod \\\"metrics-server-b955d9d8-2jrzg_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"61c41e8f41aeabadbe1b71f2d93d6df618153fe85690f2419e19e3c3df49860a\\\" network for pod \\\"metrics-server-b955d9d8-2jrzg\\\": networkPlugin cni failed to teardown pod \\\"metr
ics-server-b955d9d8-2jrzg_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.16 -j CNI-884d90612ab45be8dcb1473b -m comment --comment name: \\\"crio\\\" id: \\\"61c41e8f41aeabadbe1b71f2d93d6df618153fe85690f2419e19e3c3df49860a\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-884d90612ab45be8dcb1473b':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-b955d9d8-2jrzg" podUID=91ec99de-3cb5-41b9-b2a1-954f97a3c052
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.237474    3766 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-c78zj" podSandboxID={Type:docker ID:44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d} podNetnsPath="/proc/4955/ns/net" networkType="bridge" networkName="crio"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.268289    3766 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-4fd11a8c0e8de3475827c997 -m comment --comment name: \"crio\" id: \"44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4fd11a8c0e8de3475827c997':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-c78zj" podSandboxID={Type:docker ID:44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d} podNetnsPath="/proc/4955/ns/net" networkType="bridge" networkName="crio"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.317671    3766 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-newest-cni-20220602111446-2113\" already exists" pod="kube-system/kube-apiserver-newest-cni-20220602111446-2113"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.557024    3766 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-newest-cni-20220602111446-2113\" already exists" pod="kube-system/kube-controller-manager-newest-cni-20220602111446-2113"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.564895    3766 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d\" network for pod \"kubernetes-dashboard-cd7c84bfc-c78zj\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d\" network for pod \"kubernetes-dashboard-cd7c84bfc-c78zj\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-4fd11a8c0e8de3475827c997 -m comment --comment name: \"crio\" id: \"44af9b1deddc395359700f7644fac306d38bf2d05c8d752
ba89fb6fd19c5023d\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4fd11a8c0e8de3475827c997':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.565142    3766 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d\" network for pod \"kubernetes-dashboard-cd7c84bfc-c78zj\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d\" network for pod \"kubernetes-dashboard-cd7c84bfc-c78zj\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-4fd11a8c0e8de3475827c997 -m comment --comment name: \"crio\" id: \"44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89f
b6fd19c5023d\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4fd11a8c0e8de3475827c997':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-c78zj"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.565223    3766 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d\" network for pod \"kubernetes-dashboard-cd7c84bfc-c78zj\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d\" network for pod \"kubernetes-dashboard-cd7c84bfc-c78zj\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-4fd11a8c0e8de3475827c997 -m comment --comment name: \"crio\" id: \"44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89f
b6fd19c5023d\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4fd11a8c0e8de3475827c997':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-c78zj"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:33.565350    3766 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard(2c625cf2-f4a4-4638-8595-d6f3b0abeb10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard(2c625cf2-f4a4-4638-8595-d6f3b0abeb10)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d\\\" network for pod \\\"kubernetes-dashboard-cd7c84bfc-c78zj\\\": networkPlugin cni failed to set up pod \\\"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d\\\" network for pod \\\"kubernetes-dashboard-cd7c84bf
c-c78zj\\\": networkPlugin cni failed to teardown pod \\\"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-4fd11a8c0e8de3475827c997 -m comment --comment name: \\\"crio\\\" id: \\\"44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4fd11a8c0e8de3475827c997':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-c78zj" podUID=2c625cf2-f4a4-4638-8595-d6f3b0abeb10
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: I0602 18:16:33.908073    3766 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d\""
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: I0602 18:16:33.909748    3766 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: I0602 18:16:33.911649    3766 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d\""
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: I0602 18:16:33.911681    3766 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"4d136898217f07820ca2031a05c662d3a1585dd2538cef7720277fd83e08e313\""
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: I0602 18:16:33.911892    3766 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"61c41e8f41aeabadbe1b71f2d93d6df618153fe85690f2419e19e3c3df49860a\""
	Jun 02 18:16:33 newest-cni-20220602111446-2113 kubelet[3766]: I0602 18:16:33.912447    3766 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"72721ca1e7a3a70b36a8777563e67443ef776ff7e1502640f4a99b82f5fdf6cf\""
	
	* 
	* ==> storage-provisioner [6c76a8787c87] <==
	* I0602 18:15:52.501288       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 18:15:52.511315       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 18:15:52.511345       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 18:16:28.665674       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 18:16:28.665826       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220602111446-2113_c8f6321e-b7fd-4745-8b00-2079c78117fe!
	I0602 18:16:28.665911       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00c1aff1-f963-41db-9864-6fe44e16f73a", APIVersion:"v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220602111446-2113_c8f6321e-b7fd-4745-8b00-2079c78117fe became leader
	I0602 18:16:28.766730       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220602111446-2113_c8f6321e-b7fd-4745-8b00-2079c78117fe!
	
	* 
	* ==> storage-provisioner [8fa6bceffcfa] <==
	* I0602 18:15:24.155902       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 18:15:24.163106       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 18:15:24.163175       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 18:15:24.200071       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00c1aff1-f963-41db-9864-6fe44e16f73a", APIVersion:"v1", ResourceVersion:"512", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220602111446-2113_491d83a5-d6e1-4929-80ef-65ea73f46f26 became leader
	I0602 18:15:24.200830       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 18:15:24.201051       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220602111446-2113_491d83a5-d6e1-4929-80ef-65ea73f46f26!
	I0602 18:15:24.301651       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220602111446-2113_491d83a5-d6e1-4929-80ef-65ea73f46f26!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220602111446-2113 -n newest-cni-20220602111446-2113
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220602111446-2113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context newest-cni-20220602111446-2113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (2.382507948s)
helpers_test.go:270: non-running pods: coredns-64897985d-dk92c metrics-server-b955d9d8-2jrzg dashboard-metrics-scraper-56974995fc-xvrdt kubernetes-dashboard-cd7c84bfc-c78zj
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220602111446-2113 describe pod coredns-64897985d-dk92c metrics-server-b955d9d8-2jrzg dashboard-metrics-scraper-56974995fc-xvrdt kubernetes-dashboard-cd7c84bfc-c78zj
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220602111446-2113 describe pod coredns-64897985d-dk92c metrics-server-b955d9d8-2jrzg dashboard-metrics-scraper-56974995fc-xvrdt kubernetes-dashboard-cd7c84bfc-c78zj: exit status 1 (198.246165ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-dk92c" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-2jrzg" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-xvrdt" not found
	Error from server (NotFound): pods "kubernetes-dashboard-cd7c84bfc-c78zj" not found

** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220602111446-2113 describe pod coredns-64897985d-dk92c metrics-server-b955d9d8-2jrzg dashboard-metrics-scraper-56974995fc-xvrdt kubernetes-dashboard-cd7c84bfc-c78zj: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220602111446-2113
helpers_test.go:235: (dbg) docker inspect newest-cni-20220602111446-2113:

-- stdout --
	[
	    {
	        "Id": "f3555833564687857c958bc9235bd3dbc9a1d50fb5d1ed0f38d79f116a0f1b30",
	        "Created": "2022-06-02T18:14:53.071653941Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 244584,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T18:15:38.429000634Z",
	            "FinishedAt": "2022-06-02T18:15:36.450648512Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/f3555833564687857c958bc9235bd3dbc9a1d50fb5d1ed0f38d79f116a0f1b30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f3555833564687857c958bc9235bd3dbc9a1d50fb5d1ed0f38d79f116a0f1b30/hostname",
	        "HostsPath": "/var/lib/docker/containers/f3555833564687857c958bc9235bd3dbc9a1d50fb5d1ed0f38d79f116a0f1b30/hosts",
	        "LogPath": "/var/lib/docker/containers/f3555833564687857c958bc9235bd3dbc9a1d50fb5d1ed0f38d79f116a0f1b30/f3555833564687857c958bc9235bd3dbc9a1d50fb5d1ed0f38d79f116a0f1b30-json.log",
	        "Name": "/newest-cni-20220602111446-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220602111446-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220602111446-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ba2cea67523c33ad0b49af10e5240c2230f214aeb2d5d658755f1055661da5ff-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ba2cea67523c33ad0b49af10e5240c2230f214aeb2d5d658755f1055661da5ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ba2cea67523c33ad0b49af10e5240c2230f214aeb2d5d658755f1055661da5ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ba2cea67523c33ad0b49af10e5240c2230f214aeb2d5d658755f1055661da5ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220602111446-2113",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220602111446-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220602111446-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220602111446-2113",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220602111446-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d09d03dce3b8d3f5936a98cf2ceea7fbefd2b4ddf42cc4f9dedc11ff734d55c8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53981"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53982"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53983"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53984"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53985"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d09d03dce3b8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220602111446-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f35558335646",
	                        "newest-cni-20220602111446-2113"
	                    ],
	                    "NetworkID": "666a37f7840188b1f9b0f32678d9a5bc2c4b1c17547ec3fd4a4cd1090a45f919",
	                    "EndpointID": "ba9648796e12ff373bad6a847e3b0164286f95e7cc1692f5d621b0b70a7a564e",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220602111446-2113 -n newest-cni-20220602111446-2113
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220602111446-2113 logs -n 25

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220602111446-2113 logs -n 25: (5.011654581s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| logs    | no-preload-20220602105919-2113                             | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | no-preload-20220602105919-2113                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220602105919-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | no-preload-20220602105919-2113                             |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:07 PDT | 02 Jun 22 11:07 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:08 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220602105906-2113                        | old-k8s-version-20220602105906-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:12 PDT | 02 Jun 22 11:13 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:08 PDT | 02 Jun 22 11:13 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:13 PDT | 02 Jun 22 11:13 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:13 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220602110711-2113             | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220602110711-2113             | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220602111446-2113 --memory=2200            | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220602111446-2113 --memory=2200            | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220602111446-2113                             | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 11:15:37
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 11:15:37.112400   14877 out.go:296] Setting OutFile to fd 1 ...
	I0602 11:15:37.112636   14877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:15:37.112642   14877 out.go:309] Setting ErrFile to fd 2...
	I0602 11:15:37.112646   14877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:15:37.112746   14877 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 11:15:37.113006   14877 out.go:303] Setting JSON to false
	I0602 11:15:37.128139   14877 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4506,"bootTime":1654189231,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 11:15:37.128239   14877 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 11:15:37.166432   14877 out.go:177] * [newest-cni-20220602111446-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 11:15:37.204201   14877 notify.go:193] Checking for updates...
	I0602 11:15:37.226160   14877 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 11:15:37.247932   14877 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:15:37.269028   14877 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 11:15:37.311109   14877 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 11:15:37.332132   14877 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 11:15:37.354856   14877 config.go:178] Loaded profile config "newest-cni-20220602111446-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:15:37.355487   14877 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 11:15:37.425618   14877 docker.go:137] docker version: linux-20.10.14
	I0602 11:15:37.425792   14877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:15:37.552837   14877 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:15:37.492069854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:15:37.596427   14877 out.go:177] * Using the docker driver based on existing profile
	I0602 11:15:37.617563   14877 start.go:284] selected driver: docker
	I0602 11:15:37.617592   14877 start.go:806] validating driver "docker" against &{Name:newest-cni-20220602111446-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602111446-2113 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[a
piserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:15:37.617786   14877 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 11:15:37.621183   14877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:15:37.747027   14877 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:15:37.686581534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:15:37.747205   14877 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0602 11:15:37.747221   14877 cni.go:95] Creating CNI manager for ""
	I0602 11:15:37.747229   14877 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:15:37.747237   14877 start_flags.go:306] config:
	{Name:newest-cni-20220602111446-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602111446-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_
ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:15:37.769359   14877 out.go:177] * Starting control plane node newest-cni-20220602111446-2113 in cluster newest-cni-20220602111446-2113
	I0602 11:15:37.812916   14877 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 11:15:37.833742   14877 out.go:177] * Pulling base image ...
	I0602 11:15:37.877054   14877 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:15:37.877055   14877 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 11:15:37.877151   14877 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 11:15:37.877170   14877 cache.go:57] Caching tarball of preloaded images
	I0602 11:15:37.877383   14877 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 11:15:37.877412   14877 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 11:15:37.878438   14877 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/config.json ...
	I0602 11:15:37.942040   14877 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 11:15:37.942055   14877 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 11:15:37.942065   14877 cache.go:206] Successfully downloaded all kic artifacts
	I0602 11:15:37.942104   14877 start.go:352] acquiring machines lock for newest-cni-20220602111446-2113: {Name:mk60bd3a84f323b50cc7374421d304aa58ac015f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 11:15:37.942185   14877 start.go:356] acquired machines lock for "newest-cni-20220602111446-2113" in 57.699µs
	I0602 11:15:37.942204   14877 start.go:94] Skipping create...Using existing machine configuration
	I0602 11:15:37.942214   14877 fix.go:55] fixHost starting: 
	I0602 11:15:37.942447   14877 cli_runner.go:164] Run: docker container inspect newest-cni-20220602111446-2113 --format={{.State.Status}}
	I0602 11:15:38.009634   14877 fix.go:103] recreateIfNeeded on newest-cni-20220602111446-2113: state=Stopped err=<nil>
	W0602 11:15:38.009662   14877 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 11:15:38.053451   14877 out.go:177] * Restarting existing docker container for "newest-cni-20220602111446-2113" ...
	I0602 11:15:38.075533   14877 cli_runner.go:164] Run: docker start newest-cni-20220602111446-2113
	I0602 11:15:38.429318   14877 cli_runner.go:164] Run: docker container inspect newest-cni-20220602111446-2113 --format={{.State.Status}}
	I0602 11:15:38.501319   14877 kic.go:416] container "newest-cni-20220602111446-2113" state is running.
	I0602 11:15:38.501922   14877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602111446-2113
	I0602 11:15:38.576290   14877 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/config.json ...
	I0602 11:15:38.576686   14877 machine.go:88] provisioning docker machine ...
	I0602 11:15:38.576711   14877 ubuntu.go:169] provisioning hostname "newest-cni-20220602111446-2113"
	I0602 11:15:38.576772   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:38.649747   14877 main.go:134] libmachine: Using SSH client type: native
	I0602 11:15:38.649929   14877 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53981 <nil> <nil>}
	I0602 11:15:38.649943   14877 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220602111446-2113 && echo "newest-cni-20220602111446-2113" | sudo tee /etc/hostname
	I0602 11:15:38.773710   14877 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220602111446-2113
	
	I0602 11:15:38.773805   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:38.846243   14877 main.go:134] libmachine: Using SSH client type: native
	I0602 11:15:38.846480   14877 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53981 <nil> <nil>}
	I0602 11:15:38.846500   14877 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220602111446-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220602111446-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220602111446-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 11:15:38.972226   14877 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:15:38.972246   14877 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 11:15:38.972265   14877 ubuntu.go:177] setting up certificates
	I0602 11:15:38.972276   14877 provision.go:83] configureAuth start
	I0602 11:15:38.972349   14877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602111446-2113
	I0602 11:15:39.044589   14877 provision.go:138] copyHostCerts
	I0602 11:15:39.044674   14877 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 11:15:39.044683   14877 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 11:15:39.044772   14877 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 11:15:39.045030   14877 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 11:15:39.045038   14877 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 11:15:39.045096   14877 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 11:15:39.045237   14877 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 11:15:39.045242   14877 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 11:15:39.045299   14877 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 11:15:39.045424   14877 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220602111446-2113 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220602111446-2113]
	I0602 11:15:39.214567   14877 provision.go:172] copyRemoteCerts
	I0602 11:15:39.214630   14877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 11:15:39.214676   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:39.284765   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:39.370464   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 11:15:39.387895   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 11:15:39.404256   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0602 11:15:39.420397   14877 provision.go:86] duration metric: configureAuth took 448.099851ms
	I0602 11:15:39.420410   14877 ubuntu.go:193] setting minikube options for container-runtime
	I0602 11:15:39.420577   14877 config.go:178] Loaded profile config "newest-cni-20220602111446-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:15:39.420634   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:39.491169   14877 main.go:134] libmachine: Using SSH client type: native
	I0602 11:15:39.491311   14877 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53981 <nil> <nil>}
	I0602 11:15:39.491323   14877 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 11:15:39.605393   14877 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 11:15:39.605405   14877 ubuntu.go:71] root file system type: overlay
	I0602 11:15:39.605543   14877 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 11:15:39.605624   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:39.676737   14877 main.go:134] libmachine: Using SSH client type: native
	I0602 11:15:39.676897   14877 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53981 <nil> <nil>}
	I0602 11:15:39.676942   14877 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 11:15:39.800313   14877 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 11:15:39.800397   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:39.871024   14877 main.go:134] libmachine: Using SSH client type: native
	I0602 11:15:39.871200   14877 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53981 <nil> <nil>}
	I0602 11:15:39.871222   14877 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 11:15:39.990940   14877 main.go:134] libmachine: SSH cmd err, output: <nil>: 
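The two unit bodies above are the docker.service file that minikube stages as /lib/systemd/system/docker.service.new and the same content echoed back over SSH; the diff-or-swap command just above only replaces the live unit and restarts Docker when the staged file actually differs. A minimal sketch of checking what ended up installed on the node, assuming the newest-cni-20220602111446-2113 container from this run still exists, reuses the same inspection the provisioner runs later in this log (sudo systemctl cat docker.service) through the profile's ssh command:

	# sketch: inspect the installed unit from the host (profile name taken from this run)
	out/minikube-darwin-amd64 ssh -p newest-cni-20220602111446-2113 sudo systemctl cat docker.service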
	I0602 11:15:39.990977   14877 machine.go:91] provisioned docker machine in 1.414258393s
	I0602 11:15:39.990986   14877 start.go:306] post-start starting for "newest-cni-20220602111446-2113" (driver="docker")
	I0602 11:15:39.990993   14877 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 11:15:39.991058   14877 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 11:15:39.991110   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:40.061996   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:40.147842   14877 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 11:15:40.151457   14877 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 11:15:40.151470   14877 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 11:15:40.151478   14877 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 11:15:40.151482   14877 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 11:15:40.151490   14877 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 11:15:40.151589   14877 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 11:15:40.151721   14877 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 11:15:40.151868   14877 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 11:15:40.158741   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:15:40.175787   14877 start.go:309] post-start completed in 184.787196ms
	I0602 11:15:40.175854   14877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 11:15:40.175902   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:40.246857   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:40.329877   14877 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 11:15:40.334319   14877 fix.go:57] fixHost completed within 2.39206541s
	I0602 11:15:40.334330   14877 start.go:81] releasing machines lock for "newest-cni-20220602111446-2113", held for 2.392096368s
	I0602 11:15:40.334404   14877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602111446-2113
	I0602 11:15:40.405453   14877 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 11:15:40.405461   14877 ssh_runner.go:195] Run: systemctl --version
	I0602 11:15:40.405541   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:40.405536   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:40.482040   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:40.484877   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:40.692481   14877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 11:15:40.704677   14877 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:15:40.714842   14877 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 11:15:40.714890   14877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 11:15:40.724185   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 11:15:40.736917   14877 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 11:15:40.807790   14877 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 11:15:40.870552   14877 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:15:40.880621   14877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 11:15:40.943966   14877 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 11:15:40.953302   14877 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:15:40.988211   14877 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:15:41.069261   14877 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 11:15:41.069447   14877 cli_runner.go:164] Run: docker exec -t newest-cni-20220602111446-2113 dig +short host.docker.internal
	I0602 11:15:41.217049   14877 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 11:15:41.217148   14877 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 11:15:41.221595   14877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:15:41.232002   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:41.324236   14877 out.go:177]   - kubelet.network-plugin=cni
	I0602 11:15:41.346204   14877 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0602 11:15:41.368000   14877 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:15:41.368122   14877 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:15:41.399128   14877 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 11:15:41.399144   14877 docker.go:541] Images already preloaded, skipping extraction
	I0602 11:15:41.399220   14877 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:15:41.427956   14877 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 11:15:41.427979   14877 cache_images.go:84] Images are preloaded, skipping loading
	I0602 11:15:41.428063   14877 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 11:15:41.501895   14877 cni.go:95] Creating CNI manager for ""
	I0602 11:15:41.501908   14877 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:15:41.501923   14877 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0602 11:15:41.501936   14877 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220602111446-2113 NodeName:newest-cni-20220602111446-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false]
Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 11:15:41.502036   14877 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220602111446-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 11:15:41.502105   14877 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220602111446-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602111446-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 11:15:41.502163   14877 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 11:15:41.509619   14877 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 11:15:41.509669   14877 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 11:15:41.516795   14877 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I0602 11:15:41.529376   14877 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 11:15:41.541872   14877 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2187 bytes)
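The kubeadm configuration rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2187-byte scp just above); further down, the restart path decides whether a reconfiguration is needed by diffing it against the previously applied /var/tmp/minikube/kubeadm.yaml. A sketch of running that same check by hand against this profile, assuming the container is still up:

	# sketch: compare the staged kubeadm config with the one already applied on the node
	out/minikube-darwin-amd64 ssh -p newest-cni-20220602111446-2113 sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new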
	I0602 11:15:41.554119   14877 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 11:15:41.557752   14877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:15:41.567058   14877 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113 for IP: 192.168.58.2
	I0602 11:15:41.567162   14877 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 11:15:41.567215   14877 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 11:15:41.567289   14877 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/client.key
	I0602 11:15:41.567348   14877 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/apiserver.key.cee25041
	I0602 11:15:41.567399   14877 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/proxy-client.key
	I0602 11:15:41.567594   14877 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 11:15:41.567628   14877 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 11:15:41.567640   14877 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 11:15:41.567673   14877 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 11:15:41.567702   14877 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 11:15:41.567735   14877 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 11:15:41.567799   14877 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:15:41.568309   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 11:15:41.585232   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 11:15:41.601960   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 11:15:41.618739   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/newest-cni-20220602111446-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 11:15:41.635310   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 11:15:41.651915   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 11:15:41.669160   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 11:15:41.685715   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 11:15:41.703230   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 11:15:41.722127   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 11:15:41.739378   14877 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 11:15:41.757160   14877 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 11:15:41.769434   14877 ssh_runner.go:195] Run: openssl version
	I0602 11:15:41.774759   14877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 11:15:41.782752   14877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 11:15:41.786642   14877 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 11:15:41.786684   14877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 11:15:41.791948   14877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 11:15:41.799255   14877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 11:15:41.807112   14877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:15:41.811045   14877 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:15:41.811082   14877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:15:41.816215   14877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 11:15:41.823551   14877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 11:15:41.831043   14877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 11:15:41.834707   14877 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 11:15:41.834744   14877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 11:15:41.839706   14877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
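The ln -fs commands above follow the usual OpenSSL c_rehash convention: each certificate copied into /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash plus a .0 suffix, and that hash is exactly what the preceding openssl x509 -hash calls print (so in this run 3ec20f2e.0 points at 21132.pem, b5213941.0 at minikubeCA.pem, and 51391683.0 at 2113.pem). A minimal sketch of recomputing one of those link names by hand, assuming the node and its certs are still in place:

	# sketch: the link name is <subject hash>.0, recomputed from the cert itself
	out/minikube-darwin-amd64 ssh -p newest-cni-20220602111446-2113 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# should print b5213941, matching the b5213941.0 symlink created above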
	I0602 11:15:41.846754   14877 kubeadm.go:395] StartCluster: {Name:newest-cni-20220602111446-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602111446-2113 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_r
unning:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:15:41.846854   14877 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:15:41.875704   14877 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 11:15:41.883874   14877 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 11:15:41.883889   14877 kubeadm.go:626] restartCluster start
	I0602 11:15:41.883939   14877 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 11:15:41.891051   14877 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:41.891119   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:41.963185   14877 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220602111446-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:15:41.963373   14877 kubeconfig.go:127] "newest-cni-20220602111446-2113" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig - will repair!
	I0602 11:15:41.963722   14877 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:15:41.965032   14877 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 11:15:41.972595   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:41.972647   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:41.980555   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:42.180928   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:42.181038   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:42.192169   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:42.382715   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:42.382839   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:42.394533   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:42.580996   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:42.581099   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:42.592126   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:42.782772   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:42.782882   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:42.793804   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:42.982779   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:42.982903   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:42.994213   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:43.181683   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:43.181817   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:43.192930   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:43.381083   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:43.381152   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:43.390694   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:43.582720   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:43.582874   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:43.595350   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:43.782814   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:43.782921   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:43.793934   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:43.982410   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:43.982547   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:43.993503   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:44.181271   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:44.181368   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:44.189799   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:44.381376   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:44.381517   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:44.393282   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:44.580945   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:44.581078   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:44.592087   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:44.782598   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:44.782792   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:44.793424   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:44.981183   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:44.981327   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:44.991725   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:44.991735   14877 api_server.go:165] Checking apiserver status ...
	I0602 11:15:44.991782   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:15:44.999867   14877 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:44.999879   14877 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0602 11:15:44.999886   14877 kubeadm.go:1092] stopping kube-system containers ...
	I0602 11:15:44.999942   14877 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:15:45.033438   14877 docker.go:442] Stopping containers: [e1d64bed7589 8fa6bceffcfa 617be3b7501b bdf3626b54a9 8000fe1582b5 f293ab9d6a43 1d0857401880 fd78a96b5164 a65395e30f8e 03154e30d8a2 6c8b9d467621 ffbfaa032774 5653f77280da 47adf6bc9949 ff8ed0ab8632 36890e67d5c5]
	I0602 11:15:45.033509   14877 ssh_runner.go:195] Run: docker stop e1d64bed7589 8fa6bceffcfa 617be3b7501b bdf3626b54a9 8000fe1582b5 f293ab9d6a43 1d0857401880 fd78a96b5164 a65395e30f8e 03154e30d8a2 6c8b9d467621 ffbfaa032774 5653f77280da 47adf6bc9949 ff8ed0ab8632 36890e67d5c5
	I0602 11:15:45.062716   14877 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 11:15:45.072553   14877 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:15:45.079955   14877 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun  2 18:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  2 18:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Jun  2 18:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  2 18:15 /etc/kubernetes/scheduler.conf
	
	I0602 11:15:45.080002   14877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0602 11:15:45.087183   14877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0602 11:15:45.094366   14877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0602 11:15:45.101335   14877 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:45.101383   14877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0602 11:15:45.108132   14877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0602 11:15:45.115209   14877 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:15:45.115257   14877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0602 11:15:45.122012   14877 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:15:45.129423   14877 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 11:15:45.129435   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:15:45.173306   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:15:46.061629   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:15:46.181208   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:15:46.227664   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:15:46.274346   14877 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:15:46.274404   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:15:46.783969   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:15:47.284127   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:15:47.783755   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:15:47.793675   14877 api_server.go:71] duration metric: took 1.519303202s to wait for apiserver process to appear ...
	I0602 11:15:47.793698   14877 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:15:47.793711   14877 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53985/healthz ...
	I0602 11:15:50.014271   14877 api_server.go:266] https://127.0.0.1:53985/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0602 11:15:50.014296   14877 api_server.go:102] status: https://127.0.0.1:53985/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0602 11:15:50.516427   14877 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53985/healthz ...
	I0602 11:15:50.524747   14877 api_server.go:266] https://127.0.0.1:53985/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:15:50.524768   14877 api_server.go:102] status: https://127.0.0.1:53985/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:15:51.014458   14877 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53985/healthz ...
	I0602 11:15:51.020688   14877 api_server.go:266] https://127.0.0.1:53985/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:15:51.020708   14877 api_server.go:102] status: https://127.0.0.1:53985/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:15:51.514427   14877 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53985/healthz ...
	I0602 11:15:51.520036   14877 api_server.go:266] https://127.0.0.1:53985/healthz returned 200:
	ok
	I0602 11:15:51.527363   14877 api_server.go:140] control plane version: v1.23.6
	I0602 11:15:51.527374   14877 api_server.go:130] duration metric: took 3.73360754s to wait for apiserver health ...
	I0602 11:15:51.527381   14877 cni.go:95] Creating CNI manager for ""
	I0602 11:15:51.527386   14877 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:15:51.527396   14877 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:15:51.534273   14877 system_pods.go:59] 9 kube-system pods found
	I0602 11:15:51.534290   14877 system_pods.go:61] "coredns-64897985d-ckpbd" [f940716f-dc7a-4f33-a9e3-f89b1bbf3a7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 11:15:51.534295   14877 system_pods.go:61] "coredns-64897985d-dk92c" [d9f7db33-9fc3-4885-8d25-6ab42e9f8b8f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 11:15:51.534299   14877 system_pods.go:61] "etcd-newest-cni-20220602111446-2113" [45343802-230f-4002-83ee-3028731601ed] Running
	I0602 11:15:51.534304   14877 system_pods.go:61] "kube-apiserver-newest-cni-20220602111446-2113" [e45d4f13-ffeb-448d-a62c-1535c7511193] Running
	I0602 11:15:51.534307   14877 system_pods.go:61] "kube-controller-manager-newest-cni-20220602111446-2113" [603546ac-2b33-4b29-a2d3-efcaff1925e6] Running
	I0602 11:15:51.534313   14877 system_pods.go:61] "kube-proxy-5sjvd" [91df91a9-1e57-4106-a94a-dc45614445f1] Running
	I0602 11:15:51.534318   14877 system_pods.go:61] "kube-scheduler-newest-cni-20220602111446-2113" [14624cc4-0799-4a60-a5b9-f158f628b2be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0602 11:15:51.534322   14877 system_pods.go:61] "metrics-server-b955d9d8-2jrzg" [91ec99de-3cb5-41b9-b2a1-954f97a3c052] Pending
	I0602 11:15:51.534327   14877 system_pods.go:61] "storage-provisioner" [5ca87ae3-29fb-44fb-aaf4-0a375381b9fd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 11:15:51.534331   14877 system_pods.go:74] duration metric: took 6.930904ms to wait for pod list to return data ...
	I0602 11:15:51.534336   14877 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:15:51.537064   14877 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:15:51.537078   14877 node_conditions.go:123] node cpu capacity is 6
	I0602 11:15:51.537090   14877 node_conditions.go:105] duration metric: took 2.749266ms to run NodePressure ...
	I0602 11:15:51.537102   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:15:51.759154   14877 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 11:15:51.767481   14877 ops.go:34] apiserver oom_adj: -16
	I0602 11:15:51.767493   14877 kubeadm.go:630] restartCluster took 9.883428446s
	I0602 11:15:51.767500   14877 kubeadm.go:397] StartCluster complete in 9.920581795s
	I0602 11:15:51.767516   14877 settings.go:142] acquiring lock: {Name:mka48fc2cc9e132f8df9370d54d7f09abdd5d2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:15:51.767607   14877 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:15:51.768233   14877 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:15:51.771735   14877 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220602111446-2113" rescaled to 1
	I0602 11:15:51.771794   14877 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 11:15:51.771823   14877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 11:15:51.771832   14877 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0602 11:15:51.771913   14877 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220602111446-2113"
	I0602 11:15:51.816358   14877 out.go:177] * Verifying Kubernetes components...
	I0602 11:15:51.771929   14877 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220602111446-2113"
	I0602 11:15:51.771941   14877 addons.go:65] Setting dashboard=true in profile "newest-cni-20220602111446-2113"
	I0602 11:15:51.837567   14877 addons.go:153] Setting addon dashboard=true in "newest-cni-20220602111446-2113"
	W0602 11:15:51.837582   14877 addons.go:165] addon dashboard should already be in state true
	I0602 11:15:51.771966   14877 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220602111446-2113"
	I0602 11:15:51.772079   14877 config.go:178] Loaded profile config "newest-cni-20220602111446-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:15:51.837631   14877 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220602111446-2113"
	I0602 11:15:51.837628   14877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:15:51.837654   14877 host.go:66] Checking if "newest-cni-20220602111446-2113" exists ...
	I0602 11:15:51.816386   14877 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220602111446-2113"
	W0602 11:15:51.837707   14877 addons.go:165] addon storage-provisioner should already be in state true
	I0602 11:15:51.816393   14877 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220602111446-2113"
	I0602 11:15:51.837774   14877 host.go:66] Checking if "newest-cni-20220602111446-2113" exists ...
	W0602 11:15:51.837780   14877 addons.go:165] addon metrics-server should already be in state true
	I0602 11:15:51.837889   14877 host.go:66] Checking if "newest-cni-20220602111446-2113" exists ...
	I0602 11:15:51.838181   14877 cli_runner.go:164] Run: docker container inspect newest-cni-20220602111446-2113 --format={{.State.Status}}
	I0602 11:15:51.840623   14877 cli_runner.go:164] Run: docker container inspect newest-cni-20220602111446-2113 --format={{.State.Status}}
	I0602 11:15:51.840900   14877 cli_runner.go:164] Run: docker container inspect newest-cni-20220602111446-2113 --format={{.State.Status}}
	I0602 11:15:51.842643   14877 cli_runner.go:164] Run: docker container inspect newest-cni-20220602111446-2113 --format={{.State.Status}}
	I0602 11:15:51.955694   14877 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0602 11:15:51.955752   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:51.969552   14877 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 11:15:52.005992   14877 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0602 11:15:52.043252   14877 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0602 11:15:52.080422   14877 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:15:52.154281   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 11:15:52.212028   14877 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0602 11:15:52.154328   14877 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0602 11:15:52.154398   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:52.156569   14877 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220602111446-2113"
	W0602 11:15:52.212060   14877 addons.go:165] addon default-storageclass should already be in state true
	I0602 11:15:52.212093   14877 host.go:66] Checking if "newest-cni-20220602111446-2113" exists ...
	I0602 11:15:52.212089   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0602 11:15:52.249422   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0602 11:15:52.249441   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0602 11:15:52.249519   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:52.249535   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:52.252007   14877 cli_runner.go:164] Run: docker container inspect newest-cni-20220602111446-2113 --format={{.State.Status}}
	I0602 11:15:52.257329   14877 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:15:52.257427   14877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:15:52.283298   14877 api_server.go:71] duration metric: took 511.464425ms to wait for apiserver process to appear ...
	I0602 11:15:52.283323   14877 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:15:52.283343   14877 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53985/healthz ...
	I0602 11:15:52.295031   14877 api_server.go:266] https://127.0.0.1:53985/healthz returned 200:
	ok
	I0602 11:15:52.297658   14877 api_server.go:140] control plane version: v1.23.6
	I0602 11:15:52.297677   14877 api_server.go:130] duration metric: took 14.346421ms to wait for apiserver health ...
	I0602 11:15:52.297686   14877 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:15:52.303691   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:52.306439   14877 system_pods.go:59] 9 kube-system pods found
	I0602 11:15:52.306469   14877 system_pods.go:61] "coredns-64897985d-ckpbd" [f940716f-dc7a-4f33-a9e3-f89b1bbf3a7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 11:15:52.306490   14877 system_pods.go:61] "coredns-64897985d-dk92c" [d9f7db33-9fc3-4885-8d25-6ab42e9f8b8f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 11:15:52.306512   14877 system_pods.go:61] "etcd-newest-cni-20220602111446-2113" [45343802-230f-4002-83ee-3028731601ed] Running
	I0602 11:15:52.306525   14877 system_pods.go:61] "kube-apiserver-newest-cni-20220602111446-2113" [e45d4f13-ffeb-448d-a62c-1535c7511193] Running
	I0602 11:15:52.306532   14877 system_pods.go:61] "kube-controller-manager-newest-cni-20220602111446-2113" [603546ac-2b33-4b29-a2d3-efcaff1925e6] Running
	I0602 11:15:52.306541   14877 system_pods.go:61] "kube-proxy-5sjvd" [91df91a9-1e57-4106-a94a-dc45614445f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0602 11:15:52.306550   14877 system_pods.go:61] "kube-scheduler-newest-cni-20220602111446-2113" [14624cc4-0799-4a60-a5b9-f158f628b2be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0602 11:15:52.306559   14877 system_pods.go:61] "metrics-server-b955d9d8-2jrzg" [91ec99de-3cb5-41b9-b2a1-954f97a3c052] Pending
	I0602 11:15:52.306568   14877 system_pods.go:61] "storage-provisioner" [5ca87ae3-29fb-44fb-aaf4-0a375381b9fd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 11:15:52.306577   14877 system_pods.go:74] duration metric: took 8.88473ms to wait for pod list to return data ...
	I0602 11:15:52.306586   14877 default_sa.go:34] waiting for default service account to be created ...
	I0602 11:15:52.310095   14877 default_sa.go:45] found service account: "default"
	I0602 11:15:52.310110   14877 default_sa.go:55] duration metric: took 3.519074ms for default service account to be created ...
	I0602 11:15:52.310121   14877 kubeadm.go:572] duration metric: took 538.29283ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0602 11:15:52.310147   14877 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:15:52.356332   14877 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:15:52.356351   14877 node_conditions.go:123] node cpu capacity is 6
	I0602 11:15:52.356386   14877 node_conditions.go:105] duration metric: took 46.217393ms to run NodePressure ...
	I0602 11:15:52.356401   14877 start.go:213] waiting for startup goroutines ...
	I0602 11:15:52.358408   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:52.359033   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:52.360925   14877 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 11:15:52.360941   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 11:15:52.361030   14877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602111446-2113
	I0602 11:15:52.441993   14877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53981 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/newest-cni-20220602111446-2113/id_rsa Username:docker}
	I0602 11:15:52.474316   14877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:15:52.479895   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0602 11:15:52.479910   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0602 11:15:52.480769   14877 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0602 11:15:52.480785   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0602 11:15:52.563865   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0602 11:15:52.563879   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0602 11:15:52.568118   14877 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0602 11:15:52.568135   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0602 11:15:52.580618   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0602 11:15:52.580636   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0602 11:15:52.593643   14877 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:15:52.593657   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0602 11:15:52.602278   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0602 11:15:52.602293   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0602 11:15:52.663623   14877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:15:52.666950   14877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 11:15:52.675063   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0602 11:15:52.675079   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0602 11:15:52.756663   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0602 11:15:52.756681   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0602 11:15:52.775832   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0602 11:15:52.775845   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0602 11:15:52.861949   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0602 11:15:52.861964   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0602 11:15:52.880054   14877 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:15:52.880069   14877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0602 11:15:52.899051   14877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:15:53.757458   14877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.283090034s)
	I0602 11:15:53.758436   14877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.09477051s)
	I0602 11:15:53.758457   14877 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220602111446-2113"
	I0602 11:15:53.758478   14877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.091493481s)
	I0602 11:15:53.900139   14877 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0602 11:15:53.974108   14877 addons.go:417] enableAddons completed in 2.202246908s
	I0602 11:15:54.006631   14877 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0602 11:15:54.028219   14877 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220602111446-2113" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 18:15:38 UTC, end at Thu 2022-06-02 18:16:39 UTC. --
	Jun 02 18:15:54 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:54.296176630Z" level=info msg="ignoring event" container=186057c849deea2576ab6a8eeb6a66e0efc4a5309d5d167dc397552ea5c63840 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:15:54 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:54.311054215Z" level=info msg="ignoring event" container=0fa44147f5efe241c0cb4a601c290677ab6e6cd3f7543856340b206e16904bea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:15:55 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:55.146073311Z" level=info msg="ignoring event" container=0187171882e1ded642a15b4144559bc4cd4674d1dd100bae0c92d848090c98e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:15:55 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:15:55.164441055Z" level=info msg="ignoring event" container=4817f21a87a85365e3237ef698b41f9dacd1d3d691a4fb4d4dcb7ed97fbec9a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:32 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:32.220841361Z" level=info msg="ignoring event" container=6b5b2ef3edb984fc985ac33bcbd931f1b4ceb19e509f425f91c891ef0bce2790 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:32 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:32.890511627Z" level=info msg="ignoring event" container=72721ca1e7a3a70b36a8777563e67443ef776ff7e1502640f4a99b82f5fdf6cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:32 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:32.890563831Z" level=info msg="ignoring event" container=61c41e8f41aeabadbe1b71f2d93d6df618153fe85690f2419e19e3c3df49860a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:33.104925040Z" level=info msg="ignoring event" container=4d136898217f07820ca2031a05c662d3a1585dd2538cef7720277fd83e08e313 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:33 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:33.340654164Z" level=info msg="ignoring event" container=44af9b1deddc395359700f7644fac306d38bf2d05c8d752ba89fb6fd19c5023d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:34 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:34.702264854Z" level=info msg="ignoring event" container=3f277aad90a880dd8754e5e8bc6a1f80d79346ac199ba7b5f373e66a15163934 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:34 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:34.704038105Z" level=info msg="ignoring event" container=79c693686bf2173d13a883f3bbdbbc7686ea0f169bd1e8942ed401ac1c7cc983 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:34 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:34.704064026Z" level=info msg="ignoring event" container=6e5c81ebe1e09e939012be8587c36320e0c2535d771c593db6e8e78033df961d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:34 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:34.712556882Z" level=info msg="ignoring event" container=c57ab0c7a349ad85f3283f0f33f08345fa4604e58d7a595257deb36f1a66d7a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:35 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:35.564482094Z" level=info msg="ignoring event" container=1e7e0c4319205bd995711889b015759df505c46276c8f87cd9058d46cf695ea6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:35 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:35.633316941Z" level=info msg="ignoring event" container=056cdb065f3d304e133905f9915ab755c948dbb352b7c0303ef03527b86ca18d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:35 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:35.635229476Z" level=info msg="ignoring event" container=5490ae1e4a56deb4f97b7572d82cb7d558bf63724714952bfc152b561d276b96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:35 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:35.638900549Z" level=info msg="ignoring event" container=9d6b43281fc0d59a67e6ac10a13f0cb04c79c60fabc4e0bd0d466c0e67c1717d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:37 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:37.230495816Z" level=info msg="ignoring event" container=d16f04f5cfb161694b61ed939141dd846d7aaddb0b39eb8399af03c2e85248bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:38 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:38.262685555Z" level=info msg="ignoring event" container=1378b2b5c5a6f8adc61451bfde6ae7765aa44c13f64c4299814762ea8317ae22 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:38 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:38.334795319Z" level=info msg="ignoring event" container=df070f64f8c4b2d44605277761a4d804ea6adb6d4aff9da8804c5897d00104d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:38 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:38.342230645Z" level=info msg="ignoring event" container=3f0c3be8d24f6e177fa9fa03b6beb8e277ac142ae81e28e38daa4e26741fd213 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:39 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:39.175664906Z" level=info msg="ignoring event" container=1487a698489ca7d0d4839a65be3eacffdb94fe997948769c78302f226278c6ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:39 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:39.268827339Z" level=info msg="ignoring event" container=1fa5d2d198cfb743a75fe8417911c6e21a88ba31cdfba3b86394649455bb94fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:39 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:39.270677053Z" level=info msg="ignoring event" container=98dcd32bee8640a6df4537c457df3a2e1e6a1de7156e88ab6c0affa26493ddf2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:16:39 newest-cni-20220602111446-2113 dockerd[131]: time="2022-06-02T18:16:39.341250950Z" level=info msg="ignoring event" container=29d50d77f12fe47028f87ae766979edbba8073a028d62d1a09af5a44177b080d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	6c76a8787c875       6e38f40d628db       48 seconds ago       Running             storage-provisioner       1                   d865c91d9d622
	42ba4c375d74a       4c03754524064       48 seconds ago       Running             kube-proxy                1                   d87e70b57510e
	c07f121101bc9       df7b72818ad2e       52 seconds ago       Running             kube-controller-manager   1                   8046b20b0d10a
	d2d625f83230a       595f327f224a4       52 seconds ago       Running             kube-scheduler            1                   f4bf16b662e3f
	ccb03438c98a2       25f8c7f3da61c       52 seconds ago       Running             etcd                      1                   942942998bd72
	4242a8ad99d05       8fa62c12256df       52 seconds ago       Running             kube-apiserver            1                   3d94a3f3b5cc2
	8fa6bceffcfa2       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   617be3b7501b0
	f293ab9d6a43f       4c03754524064       About a minute ago   Exited              kube-proxy                0                   1d08574018804
	a65395e30f8ed       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   36890e67d5c54
	03154e30d8a22       595f327f224a4       About a minute ago   Exited              kube-scheduler            0                   5653f77280da1
	6c8b9d467621a       8fa62c12256df       About a minute ago   Exited              kube-apiserver            0                   ff8ed0ab86325
	ffbfaa032774b       df7b72818ad2e       About a minute ago   Exited              kube-controller-manager   0                   47adf6bc99493
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220602111446-2113
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220602111446-2113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=newest-cni-20220602111446-2113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T11_15_08_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 18:15:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220602111446-2113
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 18:16:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 18:16:29 +0000   Thu, 02 Jun 2022 18:15:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 18:16:29 +0000   Thu, 02 Jun 2022 18:15:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 18:16:29 +0000   Thu, 02 Jun 2022 18:15:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 18:16:29 +0000   Thu, 02 Jun 2022 18:16:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    newest-cni-20220602111446-2113
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                b8a9c196-0f67-4278-a87a-69d0d4fb8109
	  Boot ID:                    a475dd08-72ba-4c6d-89c1-75a58adc3783
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-dk92c                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     79s
	  kube-system                 etcd-newest-cni-20220602111446-2113                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         91s
	  kube-system                 kube-apiserver-newest-cni-20220602111446-2113              250m (4%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-newest-cni-20220602111446-2113    200m (3%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-5sjvd                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-newest-cni-20220602111446-2113              100m (1%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 metrics-server-b955d9d8-2jrzg                              100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         76s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-xvrdt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kubernetes-dashboard        kubernetes-dashboard-cd7c84bfc-c78zj                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 78s                kube-proxy  
	  Normal  Starting                 49s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    98s (x5 over 98s)  kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  98s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     98s (x4 over 98s)  kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  98s (x5 over 98s)  kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientMemory
	  Normal  Starting                 92s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  92s                kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s                kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s                kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  92s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                81s                kubelet     Node newest-cni-20220602111446-2113 status is now: NodeReady
	  Normal  Starting                 54s                kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    54s (x7 over 54s)  kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x7 over 54s)  kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  54s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  54s (x7 over 54s)  kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s                kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s                kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s                kubelet     Node newest-cni-20220602111446-2113 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             11s                kubelet     Node newest-cni-20220602111446-2113 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  11s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                11s                kubelet     Node newest-cni-20220602111446-2113 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [a65395e30f8e] <==
	* {"level":"info","ts":"2022-06-02T18:15:03.525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220602111446-2113 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:15:03.525Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:15:03.526Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T18:15:03.526Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T18:15:03.526Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-02T18:15:03.529Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:15:03.529Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:15:03.529Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:15:03.530Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T18:15:24.648Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-02T18:15:24.648Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"newest-cni-20220602111446-2113","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/06/02 18:15:24 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/02 18:15:24 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-02T18:15:24.694Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-06-02T18:15:24.696Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:15:24.698Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:15:24.698Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"newest-cni-20220602111446-2113","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> etcd [ccb03438c98a] <==
	* {"level":"info","ts":"2022-06-02T18:15:48.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-02T18:15:48.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:15:48.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-06-02T18:15:48.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-06-02T18:15:48.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-06-02T18:15:48.518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-06-02T18:15:48.519Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220602111446-2113 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T18:15:48.519Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:15:48.519Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:15:48.520Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T18:15:48.520Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T18:15:48.520Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-02T18:15:48.520Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T18:15:52.172Z","caller":"traceutil/trace.go:171","msg":"trace[1883045834] linearizableReadLoop","detail":"{readStateIndex:553; appliedIndex:553; }","duration":"194.706539ms","start":"2022-06-02T18:15:51.978Z","end":"2022-06-02T18:15:52.172Z","steps":["trace[1883045834] 'read index received'  (duration: 194.701511ms)","trace[1883045834] 'applied index is now lower than readState.Index'  (duration: 4.486µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T18:15:52.173Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"194.950974ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:1 size:994"}
	{"level":"info","ts":"2022-06-02T18:15:52.173Z","caller":"traceutil/trace.go:171","msg":"trace[1044261052] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:522; }","duration":"195.085941ms","start":"2022-06-02T18:15:51.978Z","end":"2022-06-02T18:15:52.173Z","steps":["trace[1044261052] 'agreement among raft nodes before linearized reading'  (duration: 194.916033ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T18:15:52.270Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"223.357707ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-06-02T18:15:52.270Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"291.370392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-newest-cni-20220602111446-2113\" ","response":"range_response_count:1 size:7385"}
	{"level":"info","ts":"2022-06-02T18:15:52.270Z","caller":"traceutil/trace.go:171","msg":"trace[439630751] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-newest-cni-20220602111446-2113; range_end:; response_count:1; response_revision:523; }","duration":"291.436358ms","start":"2022-06-02T18:15:51.978Z","end":"2022-06-02T18:15:52.270Z","steps":["trace[439630751] 'agreement among raft nodes before linearized reading'  (duration: 291.315533ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T18:15:52.270Z","caller":"traceutil/trace.go:171","msg":"trace[418227208] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:523; }","duration":"223.425321ms","start":"2022-06-02T18:15:52.046Z","end":"2022-06-02T18:15:52.270Z","steps":["trace[418227208] 'agreement among raft nodes before linearized reading'  (duration: 223.31959ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T18:16:33.554Z","caller":"traceutil/trace.go:171","msg":"trace[1087117457] linearizableReadLoop","detail":"{readStateIndex:678; appliedIndex:678; }","duration":"102.289252ms","start":"2022-06-02T18:16:33.451Z","end":"2022-06-02T18:16:33.554Z","steps":["trace[1087117457] 'read index received'  (duration: 102.282684ms)","trace[1087117457] 'applied index is now lower than readState.Index'  (duration: 5.271µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T18:16:33.554Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.652264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-newest-cni-20220602111446-2113\" ","response":"range_response_count:1 size:7314"}
	{"level":"info","ts":"2022-06-02T18:16:33.554Z","caller":"traceutil/trace.go:171","msg":"trace[715785955] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-newest-cni-20220602111446-2113; range_end:; response_count:1; response_revision:634; }","duration":"102.698906ms","start":"2022-06-02T18:16:33.451Z","end":"2022-06-02T18:16:33.554Z","steps":["trace[715785955] 'agreement among raft nodes before linearized reading'  (duration: 102.566552ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T18:16:37.601Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"109.180226ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-c78zj.16f4e07d778a0644\" ","response":"range_response_count:1 size:770"}
	{"level":"info","ts":"2022-06-02T18:16:37.601Z","caller":"traceutil/trace.go:171","msg":"trace[2111093722] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-c78zj.16f4e07d778a0644; range_end:; response_count:1; response_revision:662; }","duration":"109.257969ms","start":"2022-06-02T18:16:37.492Z","end":"2022-06-02T18:16:37.601Z","steps":["trace[2111093722] 'agreement among raft nodes before linearized reading'  (duration: 24.923886ms)","trace[2111093722] 'range keys from in-memory index tree'  (duration: 84.225647ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  18:16:41 up  1:04,  0 users,  load average: 1.75, 1.25, 1.13
	Linux newest-cni-20220602111446-2113 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [4242a8ad99d0] <==
	* I0602 18:15:50.115261       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0602 18:15:50.115437       1 cache.go:39] Caches are synced for autoregister controller
	I0602 18:15:50.117613       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0602 18:15:50.128575       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0602 18:15:50.128634       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0602 18:15:50.138633       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0602 18:15:51.014538       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0602 18:15:51.014554       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0602 18:15:51.020507       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	W0602 18:15:51.187003       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 18:15:51.187105       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 18:15:51.187113       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0602 18:15:51.273981       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0602 18:15:51.690701       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 18:15:51.718939       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 18:15:51.744154       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0602 18:15:51.756375       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 18:15:51.761139       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 18:15:53.695482       1 controller.go:611] quota admission added evaluator for: namespaces
	I0602 18:15:53.876714       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.96.171.28]
	I0602 18:15:53.886162       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.98.98.177]
	I0602 18:16:28.662601       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 18:16:29.436054       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0602 18:16:29.751850       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [6c8b9d467621] <==
	* W0602 18:15:34.084965       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.102387       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.119137       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.122532       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.163605       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.175540       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.201099       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.207641       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.207996       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.216898       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.284171       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.364660       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.367232       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.387876       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.520083       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.531265       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.533079       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.533112       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.539091       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.541722       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.567920       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.592062       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.615766       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.640781       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 18:15:34.643915       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [c07f121101bc] <==
	* I0602 18:16:29.447981       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0602 18:16:29.448059       1 shared_informer.go:247] Caches are synced for job 
	I0602 18:16:29.448290       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0602 18:16:29.448364       1 shared_informer.go:247] Caches are synced for disruption 
	I0602 18:16:29.448373       1 disruption.go:371] Sending events to api server.
	I0602 18:16:29.456460       1 shared_informer.go:247] Caches are synced for namespace 
	I0602 18:16:29.535066       1 shared_informer.go:247] Caches are synced for service account 
	I0602 18:16:29.535083       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0602 18:16:29.539615       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 18:16:29.561468       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 18:16:29.571101       1 shared_informer.go:247] Caches are synced for taint 
	I0602 18:16:29.571162       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	I0602 18:16:29.571182       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0602 18:16:29.571204       1 node_lifecycle_controller.go:1012] Missing timestamp for Node newest-cni-20220602111446-2113. Assuming now as a timestamp.
	I0602 18:16:29.571222       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0602 18:16:29.571505       1 event.go:294] "Event occurred" object="newest-cni-20220602111446-2113" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220602111446-2113 event: Registered Node newest-cni-20220602111446-2113 in Controller"
	I0602 18:16:29.573957       1 shared_informer.go:247] Caches are synced for attach detach 
	I0602 18:16:29.659755       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0602 18:16:29.755875       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-cd7c84bfc to 1"
	I0602 18:16:29.755964       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0602 18:16:29.903736       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-cd7c84bfc-c78zj"
	I0602 18:16:29.904921       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-xvrdt"
	I0602 18:16:30.059480       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 18:16:30.063806       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 18:16:30.063837       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [ffbfaa032774] <==
	* I0602 18:15:20.885949       1 shared_informer.go:247] Caches are synced for job 
	I0602 18:15:20.889811       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0602 18:15:20.891118       1 shared_informer.go:247] Caches are synced for namespace 
	I0602 18:15:20.894446       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0602 18:15:20.900107       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5sjvd"
	I0602 18:15:20.941193       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0602 18:15:20.949550       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0602 18:15:20.954060       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0602 18:15:21.082701       1 shared_informer.go:247] Caches are synced for disruption 
	I0602 18:15:21.082758       1 disruption.go:371] Sending events to api server.
	I0602 18:15:21.095312       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 18:15:21.096838       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 18:15:21.135721       1 shared_informer.go:247] Caches are synced for stateful set 
	I0602 18:15:21.244824       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0602 18:15:21.515067       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 18:15:21.533842       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 18:15:21.534040       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0602 18:15:21.697906       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-ckpbd"
	I0602 18:15:21.702597       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-dk92c"
	I0602 18:15:21.702652       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0602 18:15:21.717257       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-ckpbd"
	I0602 18:15:23.993729       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0602 18:15:23.995505       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0602 18:15:24.002244       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0602 18:15:24.009766       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-2jrzg"
	
	* 
	* ==> kube-proxy [42ba4c375d74] <==
	* I0602 18:15:51.252640       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0602 18:15:51.252695       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0602 18:15:51.252721       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 18:15:51.268752       1 server_others.go:206] "Using iptables Proxier"
	I0602 18:15:51.268795       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 18:15:51.268803       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 18:15:51.268818       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 18:15:51.269424       1 server.go:656] "Version info" version="v1.23.6"
	I0602 18:15:51.270014       1 config.go:317] "Starting service config controller"
	I0602 18:15:51.270054       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 18:15:51.270067       1 config.go:226] "Starting endpoint slice config controller"
	I0602 18:15:51.270070       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 18:15:51.370278       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 18:15:51.370305       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [f293ab9d6a43] <==
	* I0602 18:15:22.069746       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0602 18:15:22.069835       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0602 18:15:22.069880       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 18:15:22.107667       1 server_others.go:206] "Using iptables Proxier"
	I0602 18:15:22.107702       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 18:15:22.107708       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 18:15:22.107717       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 18:15:22.108076       1 server.go:656] "Version info" version="v1.23.6"
	I0602 18:15:22.108947       1 config.go:317] "Starting service config controller"
	I0602 18:15:22.109009       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 18:15:22.109076       1 config.go:226] "Starting endpoint slice config controller"
	I0602 18:15:22.109082       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 18:15:22.209084       1 shared_informer.go:247] Caches are synced for service config 
	I0602 18:15:22.209178       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [03154e30d8a2] <==
	* E0602 18:15:05.802474       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0602 18:15:05.802442       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 18:15:05.802525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 18:15:05.802607       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0602 18:15:05.802565       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0602 18:15:05.802636       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 18:15:05.802646       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 18:15:05.802687       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0602 18:15:05.802715       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0602 18:15:05.802877       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 18:15:05.802917       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 18:15:05.804277       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0602 18:15:05.804293       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 18:15:06.648664       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0602 18:15:06.648717       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0602 18:15:06.650846       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 18:15:06.650893       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 18:15:06.719512       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 18:15:06.719548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0602 18:15:06.827391       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 18:15:06.827435       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0602 18:15:07.334530       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0602 18:15:24.711628       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 18:15:24.711797       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0602 18:15:24.715962       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kube-scheduler [d2d625f83230] <==
	* W0602 18:15:47.505147       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0602 18:15:48.027837       1 serving.go:348] Generated self-signed cert in-memory
	W0602 18:15:50.045173       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0602 18:15:50.045555       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 18:15:50.045814       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0602 18:15:50.046899       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0602 18:15:50.054889       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0602 18:15:50.077456       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0602 18:15:50.077467       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 18:15:50.077535       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0602 18:15:50.078815       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0602 18:15:50.178818       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 18:15:38 UTC, end at Thu 2022-06-02 18:16:43 UTC. --
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: I0602 18:16:42.039171    3766 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2580885b030eb2dc5ae5cac3543139648530a85981237e8c36252f6a746c085d"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: I0602 18:16:42.040740    3766 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"2580885b030eb2dc5ae5cac3543139648530a85981237e8c36252f6a746c085d\""
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.571645    3766 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-64897985d-dk92c" podSandboxID={Type:docker ID:f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c} podNetnsPath="/proc/8928/ns/net" networkType="bridge" networkName="crio"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.598098    3766 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.43 -j CNI-df4f9d9d9caa17620ad274f5 -m comment --comment name: \"crio\" id: \"f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-df4f9d9d9caa17620ad274f5':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/coredns-64897985d-dk92c" podSandboxID={Type:docker ID:f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c} podNetnsPath="/proc/8928/ns/net" networkType="bridge" networkName="crio"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.650446    3766 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/metrics-server-b955d9d8-2jrzg" podSandboxID={Type:docker ID:bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d} podNetnsPath="/proc/8940/ns/net" networkType="bridge" networkName="crio"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.654568    3766 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-c78zj" podSandboxID={Type:docker ID:f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875be2280e70d613} podNetnsPath="/proc/8958/ns/net" networkType="bridge" networkName="crio"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.683603    3766 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.45 -j CNI-5f1463d840a0d0b9cd0833f5 -m comment --comment name: \"crio\" id: \"bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-5f1463d840a0d0b9cd0833f5':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/metrics-server-b955d9d8-2jrzg" podSandboxID={Type:docker ID:bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d} podNetnsPath="/proc/8940/ns/net" networkType="bridge" networkName="crio"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.688770    3766 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.44 -j CNI-246f7c87197a607f41f01282 -m comment --comment name: \"crio\" id: \"f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875be2280e70d613\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-246f7c87197a607f41f01282':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-c78zj" podSandboxID={Type:docker ID:f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875be2280e70d613} podNetnsPath="/proc/8958/ns/net" networkType="bridge" networkName="crio"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.866808    3766 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c\" network for pod \"coredns-64897985d-dk92c\": networkPlugin cni failed to set up pod \"coredns-64897985d-dk92c_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c\" network for pod \"coredns-64897985d-dk92c\": networkPlugin cni failed to teardown pod \"coredns-64897985d-dk92c_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.43 -j CNI-df4f9d9d9caa17620ad274f5 -m comment --comment name: \"crio\" id: \"f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c\" --wait]: exit status 2: iptables v1.8.4 (legacy):
Couldn't load target `CNI-df4f9d9d9caa17620ad274f5':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.866873    3766 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c\" network for pod \"coredns-64897985d-dk92c\": networkPlugin cni failed to set up pod \"coredns-64897985d-dk92c_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c\" network for pod \"coredns-64897985d-dk92c\": networkPlugin cni failed to teardown pod \"coredns-64897985d-dk92c_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.43 -j CNI-df4f9d9d9caa17620ad274f5 -m comment --comment name: \"crio\" id: \"f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c\" --wait]: exit status 2: iptables v1.8.4 (legacy): Could
n't load target `CNI-df4f9d9d9caa17620ad274f5':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-dk92c"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.866896    3766 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c\" network for pod \"coredns-64897985d-dk92c\": networkPlugin cni failed to set up pod \"coredns-64897985d-dk92c_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c\" network for pod \"coredns-64897985d-dk92c\": networkPlugin cni failed to teardown pod \"coredns-64897985d-dk92c_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.43 -j CNI-df4f9d9d9caa17620ad274f5 -m comment --comment name: \"crio\" id: \"f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c\" --wait]: exit status 2: iptables v1.8.4 (legacy): Could
n't load target `CNI-df4f9d9d9caa17620ad274f5':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-dk92c"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.866942    3766 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-dk92c_kube-system(d9f7db33-9fc3-4885-8d25-6ab42e9f8b8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-dk92c_kube-system(d9f7db33-9fc3-4885-8d25-6ab42e9f8b8f)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c\\\" network for pod \\\"coredns-64897985d-dk92c\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-dk92c_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c\\\" network for pod \\\"coredns-64897985d-dk92c\\\": networkPlugin cni failed to teardown pod \\\"coredns-64897985d-dk92c_kube-syste
m\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.43 -j CNI-df4f9d9d9caa17620ad274f5 -m comment --comment name: \\\"crio\\\" id: \\\"f08375e63336b7458fdb18b1c65665ce59f5a757921bf7794e32d345fdec635c\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-df4f9d9d9caa17620ad274f5':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-64897985d-dk92c" podUID=d9f7db33-9fc3-4885-8d25-6ab42e9f8b8f
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.868265    3766 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d\" network for pod \"metrics-server-b955d9d8-2jrzg\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-2jrzg_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d\" network for pod \"metrics-server-b955d9d8-2jrzg\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-2jrzg_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.45 -j CNI-5f1463d840a0d0b9cd0833f5 -m comment --comment name: \"crio\" id: \"bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d\" --wait]: exit status 2: ip
tables v1.8.4 (legacy): Couldn't load target `CNI-5f1463d840a0d0b9cd0833f5':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.868318    3766 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d\" network for pod \"metrics-server-b955d9d8-2jrzg\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-2jrzg_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d\" network for pod \"metrics-server-b955d9d8-2jrzg\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-2jrzg_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.45 -j CNI-5f1463d840a0d0b9cd0833f5 -m comment --comment name: \"crio\" id: \"bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d\" --wait]: exit status 2: iptable
s v1.8.4 (legacy): Couldn't load target `CNI-5f1463d840a0d0b9cd0833f5':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-2jrzg"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.868341    3766 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d\" network for pod \"metrics-server-b955d9d8-2jrzg\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-2jrzg_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d\" network for pod \"metrics-server-b955d9d8-2jrzg\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-2jrzg_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.45 -j CNI-5f1463d840a0d0b9cd0833f5 -m comment --comment name: \"crio\" id: \"bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d\" --wait]: exit status 2: iptable
s v1.8.4 (legacy): Couldn't load target `CNI-5f1463d840a0d0b9cd0833f5':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-2jrzg"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.868380    3766 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-b955d9d8-2jrzg_kube-system(91ec99de-3cb5-41b9-b2a1-954f97a3c052)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-b955d9d8-2jrzg_kube-system(91ec99de-3cb5-41b9-b2a1-954f97a3c052)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d\\\" network for pod \\\"metrics-server-b955d9d8-2jrzg\\\": networkPlugin cni failed to set up pod \\\"metrics-server-b955d9d8-2jrzg_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d\\\" network for pod \\\"metrics-server-b955d9d8-2jrzg\\\": networkPlugin cni failed to teardown pod \\\"metr
ics-server-b955d9d8-2jrzg_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.45 -j CNI-5f1463d840a0d0b9cd0833f5 -m comment --comment name: \\\"crio\\\" id: \\\"bf9c6e3b2552467cf41f7974f032098bc6981a4c8bc0f1de521303a808f7f41d\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-5f1463d840a0d0b9cd0833f5':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-b955d9d8-2jrzg" podUID=91ec99de-3cb5-41b9-b2a1-954f97a3c052
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.872548    3766 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875be2280e70d613\" network for pod \"kubernetes-dashboard-cd7c84bfc-c78zj\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875be2280e70d613\" network for pod \"kubernetes-dashboard-cd7c84bfc-c78zj\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.44 -j CNI-246f7c87197a607f41f01282 -m comment --comment name: \"crio\" id: \"f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b
2875be2280e70d613\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-246f7c87197a607f41f01282':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.872610    3766 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875be2280e70d613\" network for pod \"kubernetes-dashboard-cd7c84bfc-c78zj\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875be2280e70d613\" network for pod \"kubernetes-dashboard-cd7c84bfc-c78zj\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.44 -j CNI-246f7c87197a607f41f01282 -m comment --comment name: \"crio\" id: \"f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875b
e2280e70d613\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-246f7c87197a607f41f01282':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-c78zj"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.872637    3766 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875be2280e70d613\" network for pod \"kubernetes-dashboard-cd7c84bfc-c78zj\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875be2280e70d613\" network for pod \"kubernetes-dashboard-cd7c84bfc-c78zj\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.44 -j CNI-246f7c87197a607f41f01282 -m comment --comment name: \"crio\" id: \"f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875b
e2280e70d613\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-246f7c87197a607f41f01282':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-c78zj"
	Jun 02 18:16:42 newest-cni-20220602111446-2113 kubelet[3766]: E0602 18:16:42.872703    3766 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard(2c625cf2-f4a4-4638-8595-d6f3b0abeb10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard(2c625cf2-f4a4-4638-8595-d6f3b0abeb10)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875be2280e70d613\\\" network for pod \\\"kubernetes-dashboard-cd7c84bfc-c78zj\\\": networkPlugin cni failed to set up pod \\\"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875be2280e70d613\\\" network for pod \\\"kubernetes-dashboard-cd7c84bf
c-c78zj\\\": networkPlugin cni failed to teardown pod \\\"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.44 -j CNI-246f7c87197a607f41f01282 -m comment --comment name: \\\"crio\\\" id: \\\"f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875be2280e70d613\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-246f7c87197a607f41f01282':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-c78zj" podUID=2c625cf2-f4a4-4638-8595-d6f3b0abeb10
	Jun 02 18:16:43 newest-cni-20220602111446-2113 kubelet[3766]: I0602 18:16:43.049418    3766 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"dashboard-metrics-scraper-56974995fc-xvrdt_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fb8c6a2bb1cf34c97c5ef6b5eb0136e441b27fc6b635c3e8daacfcd2f102ab7a\""
	Jun 02 18:16:43 newest-cni-20220602111446-2113 kubelet[3766]: I0602 18:16:43.054167    3766 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="fb8c6a2bb1cf34c97c5ef6b5eb0136e441b27fc6b635c3e8daacfcd2f102ab7a"
	Jun 02 18:16:43 newest-cni-20220602111446-2113 kubelet[3766]: I0602 18:16:43.055616    3766 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"fb8c6a2bb1cf34c97c5ef6b5eb0136e441b27fc6b635c3e8daacfcd2f102ab7a\""
	Jun 02 18:16:43 newest-cni-20220602111446-2113 kubelet[3766]: I0602 18:16:43.058056    3766 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"kubernetes-dashboard-cd7c84bfc-c78zj_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"f63a89d128d97c9e0e6d25737148d5ee7293d7930c32b3b2875be2280e70d613\""
	Jun 02 18:16:43 newest-cni-20220602111446-2113 kubelet[3766]: I0602 18:16:43.067041    3766 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b3e93ce877ee92af09a630f6484ee7d039d42916aab295fd60a6840d2e62ede4"
	
	* 
	* ==> storage-provisioner [6c76a8787c87] <==
	* I0602 18:15:52.501288       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 18:15:52.511315       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 18:15:52.511345       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 18:16:28.665674       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 18:16:28.665826       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220602111446-2113_c8f6321e-b7fd-4745-8b00-2079c78117fe!
	I0602 18:16:28.665911       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00c1aff1-f963-41db-9864-6fe44e16f73a", APIVersion:"v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220602111446-2113_c8f6321e-b7fd-4745-8b00-2079c78117fe became leader
	I0602 18:16:28.766730       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220602111446-2113_c8f6321e-b7fd-4745-8b00-2079c78117fe!
	
	* 
	* ==> storage-provisioner [8fa6bceffcfa] <==
	* I0602 18:15:24.155902       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 18:15:24.163106       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 18:15:24.163175       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 18:15:24.200071       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00c1aff1-f963-41db-9864-6fe44e16f73a", APIVersion:"v1", ResourceVersion:"512", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220602111446-2113_491d83a5-d6e1-4929-80ef-65ea73f46f26 became leader
	I0602 18:15:24.200830       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 18:15:24.201051       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220602111446-2113_491d83a5-d6e1-4929-80ef-65ea73f46f26!
	I0602 18:15:24.301651       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220602111446-2113_491d83a5-d6e1-4929-80ef-65ea73f46f26!
	

                                                
                                                
-- /stdout --
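The kubelet entries above show the same failure pair for every sandbox the metrics-server and kubernetes-dashboard pods try to create: the CNI bridge plugin cannot assign an address to cni0 ("failed to set bridge addr: ... permission denied"), and the follow-up teardown fails because the per-sandbox CNI-* target is already gone from the legacy iptables nat table ("Couldn't load target ..."). Both conditions can be checked directly against this profile; the sketch below takes the profile name from the log and assumes the node image ships the usual ip and iptables binaries:

	# Check whether cni0 exists and carries an address inside the node.
	out/minikube-darwin-amd64 ssh -p newest-cni-20220602111446-2113 -- ip addr show cni0
	# List the CNI-managed POSTROUTING rules the teardown step was trying to delete.
	out/minikube-darwin-amd64 ssh -p newest-cni-20220602111446-2113 -- sudo iptables -t nat -S POSTROUTING | grep CNI-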
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220602111446-2113 -n newest-cni-20220602111446-2113
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220602111446-2113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-dk92c metrics-server-b955d9d8-2jrzg dashboard-metrics-scraper-56974995fc-xvrdt kubernetes-dashboard-cd7c84bfc-c78zj
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220602111446-2113 describe pod coredns-64897985d-dk92c metrics-server-b955d9d8-2jrzg dashboard-metrics-scraper-56974995fc-xvrdt kubernetes-dashboard-cd7c84bfc-c78zj
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220602111446-2113 describe pod coredns-64897985d-dk92c metrics-server-b955d9d8-2jrzg dashboard-metrics-scraper-56974995fc-xvrdt kubernetes-dashboard-cd7c84bfc-c78zj: exit status 1 (257.038486ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-dk92c" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-2jrzg" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-xvrdt" not found
	Error from server (NotFound): pods "kubernetes-dashboard-cd7c84bfc-c78zj" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220602111446-2113 describe pod coredns-64897985d-dk92c metrics-server-b955d9d8-2jrzg dashboard-metrics-scraper-56974995fc-xvrdt kubernetes-dashboard-cd7c84bfc-c78zj: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (49.79s)
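The post-mortem describe comes back empty because the four non-running pods listed a few lines earlier no longer exist by the time kubectl runs, so each describe returns NotFound. Outside the harness, the pause/unpause cycle this subtest exercises can be driven against the same profile; this is a sketch only and assumes the profile from this run still exists:

	out/minikube-darwin-amd64 pause -p newest-cni-20220602111446-2113 --alsologtostderr -v=5
	out/minikube-darwin-amd64 unpause -p newest-cni-20220602111446-2113 --alsologtostderr -v=5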

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (555.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:22:52.714848    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
E0602 11:22:54.849429    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:23:01.397339    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:23:20.407636    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
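Each poll in this window fails before any pod list is returned: the GET against the old-k8s-version apiserver on 127.0.0.1:52181 ends with EOF, so the 9m0s wait for pods matching k8s-app=kubernetes-dashboard never sees one. The check the helper keeps retrying is equivalent to a plain kubectl query, sketched here with a placeholder context name because the exact old-k8s-version profile name does not appear in these lines:

	kubectl --context <old-k8s-version-profile> -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard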

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:23:54.126776    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:24:03.549867    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:24:12.707082    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:24:20.075163    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 11:24:23.992349    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:24:29.135054    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:25:11.579179    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:25:35.762934    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:25:52.286385    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:26:34.685461    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:26:54.200702    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:27:52.719091    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:27:54.852805    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:28:01.404457    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:28:54.131709    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52181/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0602 11:29:03.556647    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0602 11:29:12.712274    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0602 11:29:20.080984    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0602 11:29:23.995653    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0602 11:29:24.472948    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0602 11:29:29.140258    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0602 11:30:11.584279    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0602 11:30:52.292688    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:289: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113: exit status 2 (487.395742ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:289: status error: exit status 2 (may be ok)
start_stop_delete_test.go:289: "old-k8s-version-20220602105906-2113" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220602105906-2113 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220602105906-2113 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.594µs)
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220602105906-2113 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
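The wait above polls for a pod labeled k8s-app=kubernetes-dashboard and then inspects the dashboard-metrics-scraper deployment for the substituted MetricsScraper image. A minimal sketch of the same checks done by hand (hypothetical, assuming the profile's kubeconfig context is reachable; in this run the apiserver was reported Stopped, so both commands would fail the same way):

# list dashboard pods by the label the test waits on
kubectl --context old-k8s-version-20220602105906-2113 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

# print the images in the scraper deployment; the test expects k8s.gcr.io/echoserver:1.4
kubectl --context old-k8s-version-20220602105906-2113 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'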
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220602105906-2113
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220602105906-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07",
	        "Created": "2022-06-02T17:59:12.760386506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204740,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T18:04:51.572935922Z",
	            "FinishedAt": "2022-06-02T18:04:48.684748032Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/hostname",
	        "HostsPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/hosts",
	        "LogPath": "/var/lib/docker/containers/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07/61b85e98188bd475e4d2e79f960974b095be22ba4ab7efe2c2cea90a7dd1df07-json.log",
	        "Name": "/old-k8s-version-20220602105906-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220602105906-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220602105906-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/44dd38c6e10b36e607d0e8384d5659a79a7f7719dd979245fc08b6c0388399ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220602105906-2113",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220602105906-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220602105906-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220602105906-2113",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220602105906-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77d71d4d8d15408927c38bc69753733fb245f90b6786c7b56828647b3b4389d6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52182"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52183"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52179"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52180"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52181"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/77d71d4d8d15",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220602105906-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "61b85e98188b",
	                        "old-k8s-version-20220602105906-2113"
	                    ],
	                    "NetworkID": "fefb74a76593392c8406a972f20a5745c2403bb46ee6809bd1a18584d4cbeee4",
	                    "EndpointID": "3cd2312efe3d60be38aeb6608533eff057e701e91a3e65f1ab1e73ec94a72df1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
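The inspect dump above is the full JSON; a narrower check of the fields the post-mortem relies on can be pulled with docker inspect's --format template (a sketch, with field names taken from the output above):

# show only the container state and restart timestamps for the profile container
docker inspect -f 'status={{.State.Status}} started={{.State.StartedAt}} finished={{.State.FinishedAt}}' old-k8s-version-20220602105906-2113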
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113: exit status 2 (425.168574ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
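The two status probes above query single fields ({{.APIServer}} and {{.Host}}); since --format takes a Go template, several fields can be read in one call (a sketch, assuming the standard Host/Kubelet/APIServer status fields):

# one-shot view of the component states the harness checks separately
out/minikube-darwin-amd64 status --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}' -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113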
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220602105906-2113 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220602105906-2113 logs -n 25: (3.52097849s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |               Profile               |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p newest-cni-20220602111446-2113 --memory=2200            | newest-cni-20220602111446-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                     |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                     |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                     |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                     |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                     |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220602111446-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                     |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                     |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                     |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220602111446-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                     |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220602111446-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                     |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                     |         |                |                     |                     |
	| start   | -p newest-cni-20220602111446-2113 --memory=2200            | newest-cni-20220602111446-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                     |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                     |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                     |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                     |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                     |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220602111446-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                     |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                     |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220602111446-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                     |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220602111446-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | newest-cni-20220602111446-2113                             |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                     |         |                |                     |                     |
	| logs    | newest-cni-20220602111446-2113                             | newest-cni-20220602111446-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | logs -n 25                                                 |                                     |         |                |                     |                     |
	| logs    | newest-cni-20220602111446-2113                             | newest-cni-20220602111446-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | logs -n 25                                                 |                                     |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220602111446-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | newest-cni-20220602111446-2113                             |                                     |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220602111446-2113      | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | newest-cni-20220602111446-2113                             |                                     |         |                |                     |                     |
	| start   | -p                                                         | embed-certs-20220602111648-2113     | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                     |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                     |         |                |                     |                     |
	|         | --wait=true --embed-certs                                  |                                     |         |                |                     |                     |
	|         | --driver=docker                                            |                                     |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                     |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220602111648-2113     | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                     |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                     |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                     |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220602111648-2113     | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                     |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220602111648-2113     | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                     |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                     |         |                |                     |                     |
	| logs    | old-k8s-version-20220602105906-2113                        | old-k8s-version-20220602105906-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:22 PDT | 02 Jun 22 11:22 PDT |
	|         | logs -n 25                                                 |                                     |         |                |                     |                     |
	| start   | -p                                                         | embed-certs-20220602111648-2113     | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:23 PDT |
	|         | embed-certs-20220602111648-2113                            |                                     |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                     |         |                |                     |                     |
	|         | --wait=true --embed-certs                                  |                                     |         |                |                     |                     |
	|         | --driver=docker                                            |                                     |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                     |         |                |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220602111648-2113     | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:23 PDT | 02 Jun 22 11:23 PDT |
	|         | embed-certs-20220602111648-2113                            |                                     |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                     |         |                |                     |                     |
	| pause   | -p                                                         | embed-certs-20220602111648-2113     | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:23 PDT | 02 Jun 22 11:23 PDT |
	|         | embed-certs-20220602111648-2113                            |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                     |         |                |                     |                     |
	| unpause | -p                                                         | embed-certs-20220602111648-2113     | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:24 PDT | 02 Jun 22 11:24 PDT |
	|         | embed-certs-20220602111648-2113                            |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                     |         |                |                     |                     |
	| logs    | embed-certs-20220602111648-2113                            | embed-certs-20220602111648-2113     | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:24 PDT | 02 Jun 22 11:24 PDT |
	|         | logs -n 25                                                 |                                     |         |                |                     |                     |
	| logs    | embed-certs-20220602111648-2113                            | embed-certs-20220602111648-2113     | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:24 PDT | 02 Jun 22 11:24 PDT |
	|         | logs -n 25                                                 |                                     |         |                |                     |                     |
	| delete  | -p                                                         | embed-certs-20220602111648-2113     | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:24 PDT | 02 Jun 22 11:24 PDT |
	|         | embed-certs-20220602111648-2113                            |                                     |         |                |                     |                     |
	| delete  | -p                                                         | embed-certs-20220602111648-2113     | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:24 PDT | 02 Jun 22 11:24 PDT |
	|         | embed-certs-20220602111648-2113                            |                                     |         |                |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 11:17:54
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 11:17:54.298706   15352 out.go:296] Setting OutFile to fd 1 ...
	I0602 11:17:54.298896   15352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:17:54.298901   15352 out.go:309] Setting ErrFile to fd 2...
	I0602 11:17:54.298905   15352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:17:54.299002   15352 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 11:17:54.299282   15352 out.go:303] Setting JSON to false
	I0602 11:17:54.314716   15352 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4643,"bootTime":1654189231,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 11:17:54.314829   15352 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 11:17:54.336522   15352 out.go:177] * [embed-certs-20220602111648-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 11:17:54.379858   15352 notify.go:193] Checking for updates...
	I0602 11:17:54.401338   15352 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 11:17:54.422430   15352 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:17:54.443822   15352 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 11:17:54.465706   15352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 11:17:54.487842   15352 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 11:17:54.510345   15352 config.go:178] Loaded profile config "embed-certs-20220602111648-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:17:54.511006   15352 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 11:17:54.583879   15352 docker.go:137] docker version: linux-20.10.14
	I0602 11:17:54.584008   15352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:17:54.710496   15352 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:17:54.661726472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:17:54.732441   15352 out.go:177] * Using the docker driver based on existing profile
	I0602 11:17:54.754261   15352 start.go:284] selected driver: docker
	I0602 11:17:54.754294   15352 start.go:806] validating driver "docker" against &{Name:embed-certs-20220602111648-2113 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Schedule
dStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:17:54.754438   15352 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 11:17:54.757822   15352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:17:54.886547   15352 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:17:54.836693909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:17:54.886708   15352 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 11:17:54.886725   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:17:54.886733   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:17:54.886755   15352 start_flags.go:306] config:
	{Name:embed-certs-20220602111648-2113 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:17:54.930397   15352 out.go:177] * Starting control plane node embed-certs-20220602111648-2113 in cluster embed-certs-20220602111648-2113
	I0602 11:17:54.952534   15352 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 11:17:54.974462   15352 out.go:177] * Pulling base image ...
	I0602 11:17:55.016639   15352 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:17:55.016641   15352 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 11:17:55.016722   15352 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 11:17:55.016736   15352 cache.go:57] Caching tarball of preloaded images
	I0602 11:17:55.016927   15352 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 11:17:55.016959   15352 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 11:17:55.017969   15352 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/config.json ...
	I0602 11:17:55.082071   15352 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 11:17:55.082088   15352 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 11:17:55.082098   15352 cache.go:206] Successfully downloaded all kic artifacts
	I0602 11:17:55.082139   15352 start.go:352] acquiring machines lock for embed-certs-20220602111648-2113: {Name:mk14ff68897b305c2bdfb36f1ceaa58ce32379a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 11:17:55.082233   15352 start.go:356] acquired machines lock for "embed-certs-20220602111648-2113" in 73.195µs
	I0602 11:17:55.082254   15352 start.go:94] Skipping create...Using existing machine configuration
	I0602 11:17:55.082263   15352 fix.go:55] fixHost starting: 
	I0602 11:17:55.082507   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:17:55.149317   15352 fix.go:103] recreateIfNeeded on embed-certs-20220602111648-2113: state=Stopped err=<nil>
	W0602 11:17:55.149352   15352 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 11:17:55.192959   15352 out.go:177] * Restarting existing docker container for "embed-certs-20220602111648-2113" ...
	I0602 11:17:55.214224   15352 cli_runner.go:164] Run: docker start embed-certs-20220602111648-2113
	I0602 11:17:55.579016   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:17:55.651976   15352 kic.go:416] container "embed-certs-20220602111648-2113" state is running.
	I0602 11:17:55.652516   15352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220602111648-2113
	I0602 11:17:55.726686   15352 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/config.json ...
	I0602 11:17:55.727067   15352 machine.go:88] provisioning docker machine ...
	I0602 11:17:55.727092   15352 ubuntu.go:169] provisioning hostname "embed-certs-20220602111648-2113"
	I0602 11:17:55.727154   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:55.800251   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:55.800475   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:55.800489   15352 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220602111648-2113 && echo "embed-certs-20220602111648-2113" | sudo tee /etc/hostname
	I0602 11:17:55.940753   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220602111648-2113
	
	I0602 11:17:55.940849   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.013703   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.013881   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.013895   15352 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220602111648-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220602111648-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220602111648-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 11:17:56.130458   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:17:56.130490   15352 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 11:17:56.130508   15352 ubuntu.go:177] setting up certificates
	I0602 11:17:56.130518   15352 provision.go:83] configureAuth start
	I0602 11:17:56.130590   15352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220602111648-2113
	I0602 11:17:56.202522   15352 provision.go:138] copyHostCerts
	I0602 11:17:56.202610   15352 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 11:17:56.202620   15352 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 11:17:56.202707   15352 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 11:17:56.202956   15352 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 11:17:56.202966   15352 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 11:17:56.203024   15352 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 11:17:56.203210   15352 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 11:17:56.203230   15352 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 11:17:56.203292   15352 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 11:17:56.203402   15352 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220602111648-2113 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220602111648-2113]
	I0602 11:17:56.290352   15352 provision.go:172] copyRemoteCerts
	I0602 11:17:56.290417   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 11:17:56.290462   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.363098   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:56.448844   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0602 11:17:56.468413   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 11:17:56.487167   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0602 11:17:56.504244   15352 provision.go:86] duration metric: configureAuth took 373.70854ms
	I0602 11:17:56.504257   15352 ubuntu.go:193] setting minikube options for container-runtime
	I0602 11:17:56.504400   15352 config.go:178] Loaded profile config "embed-certs-20220602111648-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:17:56.504454   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.574726   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.574873   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.574883   15352 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 11:17:56.692552   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 11:17:56.692565   15352 ubuntu.go:71] root file system type: overlay
	I0602 11:17:56.692719   15352 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 11:17:56.692794   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.763208   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.763366   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.763424   15352 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 11:17:56.888442   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 11:17:56.888522   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.959173   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.959343   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.959378   15352 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 11:17:57.080070   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:17:57.080081   15352 machine.go:91] provisioned docker machine in 1.352983871s
	I0602 11:17:57.080092   15352 start.go:306] post-start starting for "embed-certs-20220602111648-2113" (driver="docker")
	I0602 11:17:57.080099   15352 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 11:17:57.080167   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 11:17:57.080224   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.150320   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.237169   15352 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 11:17:57.240932   15352 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 11:17:57.240947   15352 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 11:17:57.240960   15352 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 11:17:57.240965   15352 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 11:17:57.240973   15352 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 11:17:57.241075   15352 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 11:17:57.241205   15352 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 11:17:57.241347   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 11:17:57.249423   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:17:57.266686   15352 start.go:309] post-start completed in 186.579963ms
	I0602 11:17:57.266764   15352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 11:17:57.266809   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.337389   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.419423   15352 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 11:17:57.423756   15352 fix.go:57] fixHost completed within 2.341450978s
	I0602 11:17:57.423771   15352 start.go:81] releasing machines lock for "embed-certs-20220602111648-2113", held for 2.341488916s
	I0602 11:17:57.423846   15352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220602111648-2113
	I0602 11:17:57.493832   15352 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 11:17:57.493842   15352 ssh_runner.go:195] Run: systemctl --version
	I0602 11:17:57.493909   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.493898   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.571385   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.572948   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.784521   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 11:17:57.797372   15352 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:17:57.806989   15352 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 11:17:57.807041   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 11:17:57.816005   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 11:17:57.829060   15352 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 11:17:57.898903   15352 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 11:17:57.967953   15352 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:17:57.977779   15352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 11:17:58.050651   15352 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 11:17:58.060254   15352 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:17:58.095467   15352 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:17:58.172409   15352 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 11:17:58.172543   15352 cli_runner.go:164] Run: docker exec -t embed-certs-20220602111648-2113 dig +short host.docker.internal
	I0602 11:17:58.301503   15352 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 11:17:58.301604   15352 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 11:17:58.305905   15352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:17:58.316714   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:58.387831   15352 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:17:58.387911   15352 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:17:58.416852   15352 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0602 11:17:58.416866   15352 docker.go:541] Images already preloaded, skipping extraction
	I0602 11:17:58.416944   15352 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:17:58.447690   15352 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0602 11:17:58.447713   15352 cache_images.go:84] Images are preloaded, skipping loading
	I0602 11:17:58.447820   15352 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 11:17:58.520455   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:17:58.520468   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:17:58.520483   15352 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 11:17:58.520502   15352 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220602111648-2113 NodeName:embed-certs-20220602111648-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 11:17:58.520613   15352 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20220602111648-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 11:17:58.520681   15352 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20220602111648-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 11:17:58.520742   15352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 11:17:58.528337   15352 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 11:17:58.528400   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 11:17:58.535248   15352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	I0602 11:17:58.547429   15352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 11:17:58.559653   15352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
	I0602 11:17:58.572912   15352 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 11:17:58.576677   15352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:17:58.585837   15352 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113 for IP: 192.168.58.2
	I0602 11:17:58.585959   15352 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 11:17:58.586013   15352 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 11:17:58.586093   15352 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/client.key
	I0602 11:17:58.586153   15352 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/apiserver.key.cee25041
	I0602 11:17:58.586215   15352 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/proxy-client.key
	I0602 11:17:58.586412   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 11:17:58.586453   15352 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 11:17:58.586477   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 11:17:58.586519   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 11:17:58.586551   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 11:17:58.586580   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 11:17:58.586639   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:17:58.587181   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 11:17:58.604132   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 11:17:58.620640   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 11:17:58.637561   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 11:17:58.654357   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 11:17:58.671422   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 11:17:58.687905   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 11:17:58.704559   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 11:17:58.721152   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 11:17:58.738095   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 11:17:58.754705   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 11:17:58.771067   15352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 11:17:58.783467   15352 ssh_runner.go:195] Run: openssl version
	I0602 11:17:58.788645   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 11:17:58.796302   15352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 11:17:58.800112   15352 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 11:17:58.800156   15352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 11:17:58.805418   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 11:17:58.812620   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 11:17:58.820133   15352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:17:58.824238   15352 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:17:58.824280   15352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:17:58.829346   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 11:17:58.836768   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 11:17:58.844364   15352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 11:17:58.848158   15352 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 11:17:58.848204   15352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 11:17:58.853444   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
	I0602 11:17:58.860527   15352 kubeadm.go:395] StartCluster: {Name:embed-certs-20220602111648-2113 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:17:58.860620   15352 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:17:58.889454   15352 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 11:17:58.897140   15352 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 11:17:58.897153   15352 kubeadm.go:626] restartCluster start
	I0602 11:17:58.897196   15352 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 11:17:58.903854   15352 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:58.903907   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:58.974750   15352 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220602111648-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:17:58.975016   15352 kubeconfig.go:127] "embed-certs-20220602111648-2113" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig - will repair!
	I0602 11:17:58.975368   15352 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:17:58.976710   15352 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 11:17:58.984402   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:58.984445   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:58.992514   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.194646   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.194824   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.205800   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.394596   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.394711   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.404574   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.592620   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.592742   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.603566   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.792706   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.792789   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.801888   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.992644   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.992738   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.004887   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.194652   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.194785   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.205062   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.394638   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.394783   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.405305   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.593032   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.593156   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.602450   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.793140   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.793270   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.803822   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.992792   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.992919   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.003646   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.194714   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.194891   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.206158   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.393563   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.393610   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.402165   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.593865   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.593962   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.604645   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.794719   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.794882   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.806019   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.993241   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.993427   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:02.004637   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.004647   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:02.004690   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:02.012637   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.012650   15352 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0602 11:18:02.012657   15352 kubeadm.go:1092] stopping kube-system containers ...
	I0602 11:18:02.012720   15352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:18:02.043235   15352 docker.go:442] Stopping containers: [6b1ddf58ceb9 2443900e874e db141163e6d4 0356cd90224b f1d263c9b0f1 14883f2e0c47 2b1660b40df3 2259cd9108be 1277daa5a30b 8f0298e2ec89 9fa8e7282212 4f92dc954d61 bbe61b313255 6db85ab616c7 703d34253678 d3aedabaf004]
	I0602 11:18:02.043308   15352 ssh_runner.go:195] Run: docker stop 6b1ddf58ceb9 2443900e874e db141163e6d4 0356cd90224b f1d263c9b0f1 14883f2e0c47 2b1660b40df3 2259cd9108be 1277daa5a30b 8f0298e2ec89 9fa8e7282212 4f92dc954d61 bbe61b313255 6db85ab616c7 703d34253678 d3aedabaf004
	I0602 11:18:02.073833   15352 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 11:18:02.087788   15352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:18:02.095874   15352 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun  2 18:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun  2 18:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun  2 18:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  2 18:17 /etc/kubernetes/scheduler.conf
	
	I0602 11:18:02.095938   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0602 11:18:02.103319   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0602 11:18:02.110716   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0602 11:18:02.117486   15352 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.117534   15352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0602 11:18:02.124006   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0602 11:18:02.130595   15352 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.130640   15352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
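The grep/rm pairs above are a stale-config check: each kubeconfig under /etc/kubernetes is searched for the expected endpoint https://control-plane.minikube.internal:8443, and any file that does not contain it (here controller-manager.conf and scheduler.conf) is removed so the kubeadm phases that follow regenerate it. A rough local-file equivalent, with the paths and endpoint taken from the log (sketch only; the real check runs grep over SSH):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // removeStaleKubeconfigs deletes any kubeconfig that does not mention the
    // expected control-plane endpoint, so kubeadm will regenerate it.
    func removeStaleKubeconfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                fmt.Printf("%s may not contain %s - removing\n", p, endpoint)
                os.Remove(p)
            }
        }
    }

    func main() {
        removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }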
	I0602 11:18:02.137026   15352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:18:02.143920   15352 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 11:18:02.143937   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:02.186111   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:02.940146   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:03.065256   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:03.113758   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
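The five Run lines above rebuild the existing control plane in place by invoking individual kubeadm init phases against the regenerated /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and local etcd. A compact sketch of the same loop (assumed to run on the node itself; env/PATH and SSH plumbing omitted):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same phase order as the log: certs, kubeconfig, kubelet-start,
        // control-plane, etcd - each driven by the generated kubeadm.yaml.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, ph := range phases {
            args := append(ph, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                log.Fatalf("kubeadm %v failed: %v\n%s", ph, err, out)
            }
        }
    }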
	I0602 11:18:03.165838   15352 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:18:03.165901   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:18:03.677915   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:18:04.176018   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:18:04.191155   15352 api_server.go:71] duration metric: took 1.025302471s to wait for apiserver process to appear ...
	I0602 11:18:04.191173   15352 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:18:04.191182   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:04.192377   15352 api_server.go:256] stopped: https://127.0.0.1:54894/healthz: Get "https://127.0.0.1:54894/healthz": EOF
	I0602 11:18:04.693127   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:07.094069   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0602 11:18:07.094108   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0602 11:18:07.193195   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:07.202009   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:18:07.202029   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:18:07.693364   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:07.700473   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:18:07.700494   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:18:08.192616   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:08.197675   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:18:08.197689   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:18:08.692589   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:08.697963   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 200:
	ok
	I0602 11:18:08.704402   15352 api_server.go:140] control plane version: v1.23.6
	I0602 11:18:08.704415   15352 api_server.go:130] duration metric: took 4.513159523s to wait for apiserver health ...
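The healthz exchange above is the usual post-restart progression: first an EOF while the apiserver is still binding, then 403 because the unauthenticated probe has no RBAC binding until the rbac/bootstrap-roles hook finishes, then 500 while the remaining poststarthooks complete, and finally 200. A minimal poller with the same tolerance for transient failures (sketch; a real checker would present the cluster CA instead of skipping TLS verification):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
    // treating connection errors, 403s, and 500s as "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Sketch only: skip verification instead of loading the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitForHealthz("https://127.0.0.1:54894/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }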
	I0602 11:18:08.704422   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:18:08.704427   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:18:08.704436   15352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:18:08.712420   15352 system_pods.go:59] 8 kube-system pods found
	I0602 11:18:08.712443   15352 system_pods.go:61] "coredns-64897985d-mqhps" [a9db0af0-c7e2-43f0-94d1-285cf82eefc6] Running
	I0602 11:18:08.712450   15352 system_pods.go:61] "etcd-embed-certs-20220602111648-2113" [655c91b8-a19a-4a3d-8fc4-4bb99628728c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 11:18:08.712457   15352 system_pods.go:61] "kube-apiserver-embed-certs-20220602111648-2113" [1c169e07-9698-455b-bc45-fb6268c818dd] Running
	I0602 11:18:08.712463   15352 system_pods.go:61] "kube-controller-manager-embed-certs-20220602111648-2113" [8dabcc9b-0bff-45c0-b617-b673244bb05e] Running
	I0602 11:18:08.712467   15352 system_pods.go:61] "kube-proxy-hxhmn" [0b00b834-77d9-498a-b6f4-73ada68667be] Running
	I0602 11:18:08.712471   15352 system_pods.go:61] "kube-scheduler-embed-certs-20220602111648-2113" [2d987b9c-0f04-4851-bdb4-d9d1eefcc598] Running
	I0602 11:18:08.712481   15352 system_pods.go:61] "metrics-server-b955d9d8-5k65t" [27770582-e78d-4495-83a5-a03c3c22b6ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:18:08.712489   15352 system_pods.go:61] "storage-provisioner" [971f85e7-9555-4ad3-aada-015be49207a6] Running
	I0602 11:18:08.712494   15352 system_pods.go:74] duration metric: took 8.053604ms to wait for pod list to return data ...
	I0602 11:18:08.712501   15352 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:18:08.718457   15352 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:18:08.718474   15352 node_conditions.go:123] node cpu capacity is 6
	I0602 11:18:08.718485   15352 node_conditions.go:105] duration metric: took 5.979977ms to run NodePressure ...
	I0602 11:18:08.718498   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:08.917133   15352 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0602 11:18:08.963399   15352 kubeadm.go:777] kubelet initialised
	I0602 11:18:08.963410   15352 kubeadm.go:778] duration metric: took 46.263216ms waiting for restarted kubelet to initialise ...
	I0602 11:18:08.963418   15352 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:18:08.968510   15352 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-mqhps" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:08.973930   15352 pod_ready.go:92] pod "coredns-64897985d-mqhps" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:08.973941   15352 pod_ready.go:81] duration metric: took 5.418497ms waiting for pod "coredns-64897985d-mqhps" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:08.973947   15352 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:10.987864   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:13.489319   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:15.984994   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:17.985135   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:20.487923   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:20.984961   15352 pod_ready.go:92] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:20.984975   15352 pod_ready.go:81] duration metric: took 12.010814852s waiting for pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:20.984981   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:22.996747   15352 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:23.497076   15352 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.497088   15352 pod_ready.go:81] duration metric: took 2.512058532s waiting for pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.497094   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.500990   15352 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.500999   15352 pod_ready.go:81] duration metric: took 3.899621ms waiting for pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.501005   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hxhmn" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.504762   15352 pod_ready.go:92] pod "kube-proxy-hxhmn" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.504770   15352 pod_ready.go:81] duration metric: took 3.760621ms waiting for pod "kube-proxy-hxhmn" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.504775   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.508796   15352 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.508803   15352 pod_ready.go:81] duration metric: took 4.023396ms waiting for pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.508810   15352 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:25.519475   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:28.019880   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:30.021312   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:32.520124   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:35.018464   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:37.019378   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:39.020228   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:41.520520   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:44.019685   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:46.021361   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:48.517860   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:50.519722   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:52.520558   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:55.021033   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:57.518515   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:59.520949   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:01.521775   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:04.020252   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:06.021659   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:08.522036   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:11.019578   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:13.021252   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:15.519890   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:17.522449   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:20.019069   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:22.022494   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:24.519019   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:26.520994   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:29.019342   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:31.021808   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:33.518558   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:35.522527   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:38.019317   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:40.021350   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:42.519178   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:44.522452   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:47.020277   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:49.020861   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:51.021940   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:53.522777   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:56.022962   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:58.023294   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:00.519960   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:02.521430   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:05.022687   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:07.522208   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:10.021463   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:12.519965   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:14.522183   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:17.021383   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:19.023054   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:21.520910   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:23.523643   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:26.021449   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:28.023761   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:30.522348   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:33.024537   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:35.523518   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:37.523926   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:40.023533   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:42.520330   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:44.521363   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:46.523702   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:49.021771   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:51.022021   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:53.022137   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:55.024682   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:57.522459   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:00.022039   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:02.022164   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:04.022963   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:06.023102   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:08.520914   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:10.522452   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:13.022353   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:15.024327   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:17.024604   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:19.024700   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:21.521873   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:24.026794   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:26.523991   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:29.022868   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:31.023261   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:33.023747   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:35.024513   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:37.522052   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:39.523349   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:41.523819   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:44.023580   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:46.524426   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:48.524790   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:51.025030   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:53.522632   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:55.523997   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:57.526073   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:00.025125   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:02.522387   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:04.525282   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:07.024864   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:09.523673   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:11.524761   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:13.525553   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:16.023071   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:18.023459   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:20.525701   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:23.023773   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:23.517112   15352 pod_ready.go:81] duration metric: took 4m0.004136963s waiting for pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace to be "Ready" ...
	E0602 11:22:23.517134   15352 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace to be "Ready" (will not retry!)
	I0602 11:22:23.517161   15352 pod_ready.go:38] duration metric: took 4m14.54933227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:22:23.517193   15352 kubeadm.go:630] restartCluster took 4m24.615456672s
	W0602 11:22:23.517311   15352 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
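The four-minute stretch above is the extra wait that sinks restartCluster: every system-critical pod, plus pods matching the listed labels, must report Ready within 4m0s, and metrics-server-b955d9d8-5k65t never does (its image is served from fake.domain, which does not resolve, as the addon setup further down shows). One way to express a similar wait with kubectl, one selector at a time (sketch; the test's pod_ready.go checks each pod's Ready condition through the API instead):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // waitSystemPodsReady approximates the per-pod waits above with one
    // `kubectl wait` per system-critical label selector.
    func waitSystemPodsReady(kubeconfig string) error {
        selectors := []string{
            "k8s-app=kube-dns",
            "component=etcd",
            "component=kube-apiserver",
            "component=kube-controller-manager",
            "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        for _, sel := range selectors {
            cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig,
                "-n", "kube-system", "wait", "--for=condition=Ready",
                "pod", "-l", sel, "--timeout=4m")
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("pods with %s not ready: %v\n%s", sel, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := waitSystemPodsReady("/var/lib/minikube/kubeconfig"); err != nil {
            fmt.Println(err)
        }
    }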
	I0602 11:22:23.517339   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:23:01.958873   15352 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.440855806s)
	I0602 11:23:01.958935   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:23:01.968583   15352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:23:01.976178   15352 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:23:01.976221   15352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:23:01.983698   15352 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:23:01.983724   15352 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:23:02.466453   15352 out.go:204]   - Generating certificates and keys ...
	I0602 11:23:03.315809   15352 out.go:204]   - Booting up control plane ...
	I0602 11:23:09.371051   15352 out.go:204]   - Configuring RBAC rules ...
	I0602 11:23:09.860945   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:23:09.860961   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:23:09.860985   15352 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 11:23:09.861071   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:09.861074   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=embed-certs-20220602111648-2113 minikube.k8s.io/updated_at=2022_06_02T11_23_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:09.876275   15352 ops.go:34] apiserver oom_adj: -16
	I0602 11:23:10.001073   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:10.577726   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:11.076447   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:11.576911   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:12.076437   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:12.576329   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:13.076844   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:13.577067   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:14.078221   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:14.576893   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:15.076756   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:15.576823   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:16.077283   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:16.577898   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:17.078403   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:17.577411   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:18.077781   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:18.576420   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:19.076555   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:19.576528   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:20.076412   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:20.578098   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:21.077009   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:21.576491   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:22.076566   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:22.576477   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:23.076580   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:23.131668   15352 kubeadm.go:1045] duration metric: took 13.27043884s to wait for elevateKubeSystemPrivileges.
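The run of `kubectl get sa default` calls above is the elevateKubeSystemPrivileges wait: after the minikube-rbac cluster-admin binding is created, the code polls roughly every 500ms until kubeadm's controllers have created the "default" service account. A small polling sketch of that wait (hypothetical helper, kubectl driven via os/exec):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until the service account
    // exists, mirroring the ~500ms retry loop above.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig="+kubeconfig).Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account did not appear within %v", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
            fmt.Println(err)
        }
    }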
	I0602 11:23:23.131685   15352 kubeadm.go:397] StartCluster complete in 5m24.265555176s
	I0602 11:23:23.131703   15352 settings.go:142] acquiring lock: {Name:mka48fc2cc9e132f8df9370d54d7f09abdd5d2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:23:23.131777   15352 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:23:23.132516   15352 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:23:23.648470   15352 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220602111648-2113" rescaled to 1
	I0602 11:23:23.648513   15352 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 11:23:23.648518   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 11:23:23.648543   15352 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0602 11:23:23.648750   15352 config.go:178] Loaded profile config "embed-certs-20220602111648-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:23:23.688318   15352 out.go:177] * Verifying Kubernetes components...
	I0602 11:23:23.688398   15352 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220602111648-2113"
	I0602 11:23:23.688412   15352 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220602111648-2113"
	I0602 11:23:23.688416   15352 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220602111648-2113"
	I0602 11:23:23.747362   15352 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220602111648-2113"
	I0602 11:23:23.747382   15352 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220602111648-2113"
	W0602 11:23:23.747391   15352 addons.go:165] addon metrics-server should already be in state true
	I0602 11:23:23.688418   15352 addons.go:65] Setting dashboard=true in profile "embed-certs-20220602111648-2113"
	I0602 11:23:23.747430   15352 addons.go:153] Setting addon dashboard=true in "embed-certs-20220602111648-2113"
	I0602 11:23:23.747436   15352 host.go:66] Checking if "embed-certs-20220602111648-2113" exists ...
	I0602 11:23:23.747346   15352 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220602111648-2113"
	W0602 11:23:23.747460   15352 addons.go:165] addon storage-provisioner should already be in state true
	I0602 11:23:23.747347   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:23:23.747489   15352 host.go:66] Checking if "embed-certs-20220602111648-2113" exists ...
	W0602 11:23:23.747442   15352 addons.go:165] addon dashboard should already be in state true
	I0602 11:23:23.747558   15352 host.go:66] Checking if "embed-certs-20220602111648-2113" exists ...
	I0602 11:23:23.747757   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.747877   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.748464   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.748914   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.764413   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 11:23:23.774077   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:23.869691   15352 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220602111648-2113"
	W0602 11:23:23.930483   15352 addons.go:165] addon default-storageclass should already be in state true
	I0602 11:23:23.888755   15352 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0602 11:23:23.930518   15352 host.go:66] Checking if "embed-certs-20220602111648-2113" exists ...
	I0602 11:23:23.909477   15352 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0602 11:23:23.930420   15352 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 11:23:23.931179   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.963267   15352 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220602111648-2113" to be "Ready" ...
	I0602 11:23:23.988507   15352 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0602 11:23:24.030625   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0602 11:23:24.009644   15352 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:23:24.030682   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 11:23:24.030501   15352 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0602 11:23:24.030848   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:24.030873   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:24.052496   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0602 11:23:24.052546   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0602 11:23:24.053233   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:24.057714   15352 node_ready.go:49] node "embed-certs-20220602111648-2113" has status "Ready":"True"
	I0602 11:23:24.057730   15352 node_ready.go:38] duration metric: took 27.16478ms waiting for node "embed-certs-20220602111648-2113" to be "Ready" ...
	I0602 11:23:24.057737   15352 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:23:24.066339   15352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-ps5fw" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:24.074444   15352 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 11:23:24.074462   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 11:23:24.074538   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:24.146535   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:23:24.147249   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:23:24.153394   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:23:24.156726   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:23:24.239554   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0602 11:23:24.239569   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0602 11:23:24.241122   15352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:23:24.252061   15352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 11:23:24.253413   15352 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0602 11:23:24.253424   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0602 11:23:24.256097   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0602 11:23:24.256112   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0602 11:23:24.271732   15352 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0602 11:23:24.271747   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0602 11:23:24.344571   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0602 11:23:24.344589   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0602 11:23:24.351954   15352 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:23:24.351970   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0602 11:23:24.368890   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0602 11:23:24.368902   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0602 11:23:24.454221   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0602 11:23:24.454235   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0602 11:23:24.454670   15352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:23:24.471327   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0602 11:23:24.471339   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0602 11:23:24.485285   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0602 11:23:24.485299   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0602 11:23:24.548818   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0602 11:23:24.548835   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0602 11:23:24.644195   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:23:24.644208   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0602 11:23:24.675479   15352 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0602 11:23:24.679579   15352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:23:24.942852   15352 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220602111648-2113"
	I0602 11:23:25.565936   15352 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0602 11:23:25.587209   15352 addons.go:417] enableAddons completed in 1.938607775s
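Each addon above follows the same two-step pattern: its manifests are copied into /etc/kubernetes/addons/ on the node (the scp lines), then the whole group is applied with one kubectl apply against the in-node kubeconfig. A stripped-down sketch of the apply step for the metrics-server group (hypothetical; file transfer and the sudo/KUBECONFIG wrapping are omitted):

    package main

    import (
        "log"
        "os/exec"
    )

    // applyAddon applies a group of already-copied addon manifests in one
    // kubectl call, matching the metrics-server apply line above.
    func applyAddon(kubeconfig string, manifests ...string) error {
        args := []string{"--kubeconfig=" + kubeconfig, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        return exec.Command("kubectl", args...).Run()
    }

    func main() {
        err := applyAddon("/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml")
        if err != nil {
            log.Fatal(err)
        }
    }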
	I0602 11:23:26.083086   15352 pod_ready.go:102] pod "coredns-64897985d-ps5fw" in "kube-system" namespace has status "Ready":"False"
	I0602 11:23:26.585079   15352 pod_ready.go:92] pod "coredns-64897985d-ps5fw" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.585092   15352 pod_ready.go:81] duration metric: took 2.518690418s waiting for pod "coredns-64897985d-ps5fw" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.585099   15352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-zhfn8" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.589749   15352 pod_ready.go:92] pod "coredns-64897985d-zhfn8" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.589758   15352 pod_ready.go:81] duration metric: took 4.642896ms waiting for pod "coredns-64897985d-zhfn8" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.589768   15352 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.593913   15352 pod_ready.go:92] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.593921   15352 pod_ready.go:81] duration metric: took 4.149186ms waiting for pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.593929   15352 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.598343   15352 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.598352   15352 pod_ready.go:81] duration metric: took 4.418374ms waiting for pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.598358   15352 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.603237   15352 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.603246   15352 pod_ready.go:81] duration metric: took 4.883426ms waiting for pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.603253   15352 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gcmn9" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.983215   15352 pod_ready.go:92] pod "kube-proxy-gcmn9" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.983225   15352 pod_ready.go:81] duration metric: took 379.960719ms waiting for pod "kube-proxy-gcmn9" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.983235   15352 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:27.383538   15352 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:27.383549   15352 pod_ready.go:81] duration metric: took 400.30138ms waiting for pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:27.383554   15352 pod_ready.go:38] duration metric: took 3.325734057s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:23:27.383567   15352 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:23:27.383609   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:23:27.397670   15352 api_server.go:71] duration metric: took 3.749073932s to wait for apiserver process to appear ...
	I0602 11:23:27.397685   15352 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:23:27.397693   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:23:27.402642   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 200:
	ok
	I0602 11:23:27.403842   15352 api_server.go:140] control plane version: v1.23.6
	I0602 11:23:27.403853   15352 api_server.go:130] duration metric: took 6.160965ms to wait for apiserver health ...
	I0602 11:23:27.403858   15352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:23:27.585352   15352 system_pods.go:59] 9 kube-system pods found
	I0602 11:23:27.585372   15352 system_pods.go:61] "coredns-64897985d-ps5fw" [dca916a9-6a4a-407e-af4d-19f98f5aa6c4] Running
	I0602 11:23:27.585400   15352 system_pods.go:61] "coredns-64897985d-zhfn8" [c17ca662-7b52-40a8-b1b1-661983c183d4] Running
	I0602 11:23:27.585405   15352 system_pods.go:61] "etcd-embed-certs-20220602111648-2113" [729f1076-c1d1-40f2-8c74-0716513f8c59] Running
	I0602 11:23:27.585411   15352 system_pods.go:61] "kube-apiserver-embed-certs-20220602111648-2113" [0e9e3a9d-e57f-48f8-a66e-d51393f9e509] Running
	I0602 11:23:27.585416   15352 system_pods.go:61] "kube-controller-manager-embed-certs-20220602111648-2113" [f15aadc2-e920-484a-bb54-c1db87cf9b51] Running
	I0602 11:23:27.585423   15352 system_pods.go:61] "kube-proxy-gcmn9" [9f001538-3e2b-455a-999c-bbb8b7ce2082] Running
	I0602 11:23:27.585430   15352 system_pods.go:61] "kube-scheduler-embed-certs-20220602111648-2113" [7d78f2d1-2fd3-4d17-a604-123c557dc94b] Running
	I0602 11:23:27.585435   15352 system_pods.go:61] "metrics-server-b955d9d8-d6jzn" [2e3f5fb8-e6aa-41f3-a689-f4ebd249a466] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:23:27.585442   15352 system_pods.go:61] "storage-provisioner" [37849889-6793-4475-a0b1-28f0412b616e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 11:23:27.585450   15352 system_pods.go:74] duration metric: took 181.583327ms to wait for pod list to return data ...
	I0602 11:23:27.585460   15352 default_sa.go:34] waiting for default service account to be created ...
	I0602 11:23:27.780881   15352 default_sa.go:45] found service account: "default"
	I0602 11:23:27.780894   15352 default_sa.go:55] duration metric: took 195.425402ms for default service account to be created ...
	I0602 11:23:27.780901   15352 system_pods.go:116] waiting for k8s-apps to be running ...
	I0602 11:23:27.983547   15352 system_pods.go:86] 9 kube-system pods found
	I0602 11:23:27.983561   15352 system_pods.go:89] "coredns-64897985d-ps5fw" [dca916a9-6a4a-407e-af4d-19f98f5aa6c4] Running
	I0602 11:23:27.983566   15352 system_pods.go:89] "coredns-64897985d-zhfn8" [c17ca662-7b52-40a8-b1b1-661983c183d4] Running
	I0602 11:23:27.983569   15352 system_pods.go:89] "etcd-embed-certs-20220602111648-2113" [729f1076-c1d1-40f2-8c74-0716513f8c59] Running
	I0602 11:23:27.983582   15352 system_pods.go:89] "kube-apiserver-embed-certs-20220602111648-2113" [0e9e3a9d-e57f-48f8-a66e-d51393f9e509] Running
	I0602 11:23:27.983587   15352 system_pods.go:89] "kube-controller-manager-embed-certs-20220602111648-2113" [f15aadc2-e920-484a-bb54-c1db87cf9b51] Running
	I0602 11:23:27.983591   15352 system_pods.go:89] "kube-proxy-gcmn9" [9f001538-3e2b-455a-999c-bbb8b7ce2082] Running
	I0602 11:23:27.983597   15352 system_pods.go:89] "kube-scheduler-embed-certs-20220602111648-2113" [7d78f2d1-2fd3-4d17-a604-123c557dc94b] Running
	I0602 11:23:27.983604   15352 system_pods.go:89] "metrics-server-b955d9d8-d6jzn" [2e3f5fb8-e6aa-41f3-a689-f4ebd249a466] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:23:27.983611   15352 system_pods.go:89] "storage-provisioner" [37849889-6793-4475-a0b1-28f0412b616e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 11:23:27.983617   15352 system_pods.go:126] duration metric: took 202.708238ms to wait for k8s-apps to be running ...
	I0602 11:23:27.983624   15352 system_svc.go:44] waiting for kubelet service to be running ....
	I0602 11:23:27.983673   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:23:27.996729   15352 system_svc.go:56] duration metric: took 13.098129ms WaitForService to wait for kubelet.
	I0602 11:23:27.996748   15352 kubeadm.go:572] duration metric: took 4.348143989s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0602 11:23:27.996779   15352 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:23:28.181774   15352 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:23:28.181787   15352 node_conditions.go:123] node cpu capacity is 6
	I0602 11:23:28.181794   15352 node_conditions.go:105] duration metric: took 184.999302ms to run NodePressure ...
	I0602 11:23:28.181802   15352 start.go:213] waiting for startup goroutines ...
	I0602 11:23:28.214838   15352 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0602 11:23:28.236679   15352 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220602111648-2113" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 18:04:51 UTC, end at Thu 2022-06-02 18:31:48 UTC. --
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 systemd[1]: Starting Docker Application Container Engine...
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.822221462Z" level=info msg="Starting up"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.824058418Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.824139651Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.824195269Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.824296574Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.825626806Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.825660593Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.825673330Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.825685292Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.830709849Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.834670305Z" level=info msg="Loading containers: start."
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.916131885Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.947713032Z" level=info msg="Loading containers: done."
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.958029440Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.958093467Z" level=info msg="Daemon has completed initialization"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 systemd[1]: Started Docker Application Container Engine.
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.983186383Z" level=info msg="API listen on [::]:2376"
	Jun 02 18:04:51 old-k8s-version-20220602105906-2113 dockerd[130]: time="2022-06-02T18:04:51.985769795Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-06-02T18:31:50Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  18:31:50 up  1:20,  0 users,  load average: 0.30, 0.52, 0.76
	Linux old-k8s-version-20220602105906-2113 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 18:04:51 UTC, end at Thu 2022-06-02 18:31:50 UTC. --
	Jun 02 18:31:48 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 02 18:31:49 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1667.
	Jun 02 18:31:49 old-k8s-version-20220602105906-2113 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 02 18:31:49 old-k8s-version-20220602105906-2113 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 02 18:31:49 old-k8s-version-20220602105906-2113 kubelet[34084]: I0602 18:31:49.533531   34084 server.go:410] Version: v1.16.0
	Jun 02 18:31:49 old-k8s-version-20220602105906-2113 kubelet[34084]: I0602 18:31:49.533741   34084 plugins.go:100] No cloud provider specified.
	Jun 02 18:31:49 old-k8s-version-20220602105906-2113 kubelet[34084]: I0602 18:31:49.533754   34084 server.go:773] Client rotation is on, will bootstrap in background
	Jun 02 18:31:49 old-k8s-version-20220602105906-2113 kubelet[34084]: I0602 18:31:49.535446   34084 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 02 18:31:49 old-k8s-version-20220602105906-2113 kubelet[34084]: W0602 18:31:49.536104   34084 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 02 18:31:49 old-k8s-version-20220602105906-2113 kubelet[34084]: W0602 18:31:49.536167   34084 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 02 18:31:49 old-k8s-version-20220602105906-2113 kubelet[34084]: F0602 18:31:49.536226   34084 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 02 18:31:49 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 02 18:31:49 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 02 18:31:50 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Jun 02 18:31:50 old-k8s-version-20220602105906-2113 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 02 18:31:50 old-k8s-version-20220602105906-2113 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 02 18:31:50 old-k8s-version-20220602105906-2113 kubelet[34097]: I0602 18:31:50.286021   34097 server.go:410] Version: v1.16.0
	Jun 02 18:31:50 old-k8s-version-20220602105906-2113 kubelet[34097]: I0602 18:31:50.286653   34097 plugins.go:100] No cloud provider specified.
	Jun 02 18:31:50 old-k8s-version-20220602105906-2113 kubelet[34097]: I0602 18:31:50.286748   34097 server.go:773] Client rotation is on, will bootstrap in background
	Jun 02 18:31:50 old-k8s-version-20220602105906-2113 kubelet[34097]: I0602 18:31:50.288813   34097 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 02 18:31:50 old-k8s-version-20220602105906-2113 kubelet[34097]: W0602 18:31:50.289585   34097 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 02 18:31:50 old-k8s-version-20220602105906-2113 kubelet[34097]: W0602 18:31:50.289675   34097 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 02 18:31:50 old-k8s-version-20220602105906-2113 kubelet[34097]: F0602 18:31:50.289738   34097 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 02 18:31:50 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 02 18:31:50 old-k8s-version-20220602105906-2113 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0602 11:31:50.571148   15950 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113: exit status 2 (433.592242ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220602105906-2113" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (555.01s)

TestStartStop/group/embed-certs/serial/Pause (43.59s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220602111648-2113 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220602111648-2113 -n embed-certs-20220602111648-2113

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220602111648-2113 -n embed-certs-20220602111648-2113: exit status 2 (16.10163811s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220602111648-2113 -n embed-certs-20220602111648-2113

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220602111648-2113 -n embed-certs-20220602111648-2113: exit status 2 (16.107113748s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20220602111648-2113 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220602111648-2113 -n embed-certs-20220602111648-2113
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220602111648-2113 -n embed-certs-20220602111648-2113
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220602111648-2113
helpers_test.go:235: (dbg) docker inspect embed-certs-20220602111648-2113:

-- stdout --
	[
	    {
	        "Id": "82b9747ec857b93ad9d421afe7dfdd9bdf9506aef6ec3c3632152e4907e54cdc",
	        "Created": "2022-06-02T18:16:55.180494539Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 263156,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T18:17:55.590678725Z",
	            "FinishedAt": "2022-06-02T18:17:53.678894097Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/82b9747ec857b93ad9d421afe7dfdd9bdf9506aef6ec3c3632152e4907e54cdc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82b9747ec857b93ad9d421afe7dfdd9bdf9506aef6ec3c3632152e4907e54cdc/hostname",
	        "HostsPath": "/var/lib/docker/containers/82b9747ec857b93ad9d421afe7dfdd9bdf9506aef6ec3c3632152e4907e54cdc/hosts",
	        "LogPath": "/var/lib/docker/containers/82b9747ec857b93ad9d421afe7dfdd9bdf9506aef6ec3c3632152e4907e54cdc/82b9747ec857b93ad9d421afe7dfdd9bdf9506aef6ec3c3632152e4907e54cdc-json.log",
	        "Name": "/embed-certs-20220602111648-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220602111648-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220602111648-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/56c0a32a20b715226dc33286682566f4d50b9dbbd31b44aa22b7e47f39f8584e-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56c0a32a20b715226dc33286682566f4d50b9dbbd31b44aa22b7e47f39f8584e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56c0a32a20b715226dc33286682566f4d50b9dbbd31b44aa22b7e47f39f8584e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56c0a32a20b715226dc33286682566f4d50b9dbbd31b44aa22b7e47f39f8584e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220602111648-2113",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220602111648-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220602111648-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220602111648-2113",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220602111648-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c13705557d1f6cadd9af527c7a6ad6f4165ee0fc8b7c3fb7ca9a32dc1edfd3c1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54890"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54891"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54893"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54894"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c13705557d1f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220602111648-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "82b9747ec857",
	                        "embed-certs-20220602111648-2113"
	                    ],
	                    "NetworkID": "7fc7fa81ba697d96b69d01d51b7eeadbfdb988accd570d0531903136042ab048",
	                    "EndpointID": "ebabd5ebeebd48a69586e249f7afcb29191d268e378157ab0064386cc61033d0",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220602111648-2113 -n embed-certs-20220602111648-2113
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220602111648-2113 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220602111648-2113 logs -n 25: (2.746315295s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| logs    | default-k8s-different-port-20220602110711-2113             | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220602110711-2113             | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220602111446-2113 --memory=2200            | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220602111446-2113 --memory=2200            | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220602111446-2113                             | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220602111446-2113                             | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	| start   | -p                                                         | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                |         |                |                     |                     |
	|         | --driver=docker                                            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220602105906-2113                        | old-k8s-version-20220602105906-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:22 PDT | 02 Jun 22 11:22 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p                                                         | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:23 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                |         |                |                     |                     |
	|         | --driver=docker                                            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:23 PDT | 02 Jun 22 11:23 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:23 PDT | 02 Jun 22 11:23 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:24 PDT | 02 Jun 22 11:24 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 11:17:54
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 11:17:54.298706   15352 out.go:296] Setting OutFile to fd 1 ...
	I0602 11:17:54.298896   15352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:17:54.298901   15352 out.go:309] Setting ErrFile to fd 2...
	I0602 11:17:54.298905   15352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:17:54.299002   15352 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 11:17:54.299282   15352 out.go:303] Setting JSON to false
	I0602 11:17:54.314716   15352 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4643,"bootTime":1654189231,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 11:17:54.314829   15352 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 11:17:54.336522   15352 out.go:177] * [embed-certs-20220602111648-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 11:17:54.379858   15352 notify.go:193] Checking for updates...
	I0602 11:17:54.401338   15352 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 11:17:54.422430   15352 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:17:54.443822   15352 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 11:17:54.465706   15352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 11:17:54.487842   15352 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 11:17:54.510345   15352 config.go:178] Loaded profile config "embed-certs-20220602111648-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:17:54.511006   15352 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 11:17:54.583879   15352 docker.go:137] docker version: linux-20.10.14
	I0602 11:17:54.584008   15352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:17:54.710496   15352 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:17:54.661726472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
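Note: the daemon facts logged above come from `docker system info --format "{{json .}}"`. A minimal sketch of pulling the same handful of fields by hand (this assumes the jq binary is available, which the test environment above does not confirm):

#!/usr/bin/env bash
# Print a few of the fields minikube inspects from `docker system info`.
# jq is an assumption here; it is not part of the logged test run.
docker system info --format '{{json .}}' \
  | jq '{ServerVersion, OSType, Architecture, NCPU, MemTotal, CgroupDriver}'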
	I0602 11:17:54.732441   15352 out.go:177] * Using the docker driver based on existing profile
	I0602 11:17:54.754261   15352 start.go:284] selected driver: docker
	I0602 11:17:54.754294   15352 start.go:806] validating driver "docker" against &{Name:embed-certs-20220602111648-2113 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Schedule
dStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:17:54.754438   15352 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 11:17:54.757822   15352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:17:54.886547   15352 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:17:54.836693909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:17:54.886708   15352 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 11:17:54.886725   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:17:54.886733   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:17:54.886755   15352 start_flags.go:306] config:
	{Name:embed-certs-20220602111648-2113 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clus
ter.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:17:54.930397   15352 out.go:177] * Starting control plane node embed-certs-20220602111648-2113 in cluster embed-certs-20220602111648-2113
	I0602 11:17:54.952534   15352 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 11:17:54.974462   15352 out.go:177] * Pulling base image ...
	I0602 11:17:55.016639   15352 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:17:55.016641   15352 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 11:17:55.016722   15352 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 11:17:55.016736   15352 cache.go:57] Caching tarball of preloaded images
	I0602 11:17:55.016927   15352 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 11:17:55.016959   15352 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 11:17:55.017969   15352 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/config.json ...
	I0602 11:17:55.082071   15352 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 11:17:55.082088   15352 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 11:17:55.082098   15352 cache.go:206] Successfully downloaded all kic artifacts
	I0602 11:17:55.082139   15352 start.go:352] acquiring machines lock for embed-certs-20220602111648-2113: {Name:mk14ff68897b305c2bdfb36f1ceaa58ce32379a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 11:17:55.082233   15352 start.go:356] acquired machines lock for "embed-certs-20220602111648-2113" in 73.195µs
	I0602 11:17:55.082254   15352 start.go:94] Skipping create...Using existing machine configuration
	I0602 11:17:55.082263   15352 fix.go:55] fixHost starting: 
	I0602 11:17:55.082507   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:17:55.149317   15352 fix.go:103] recreateIfNeeded on embed-certs-20220602111648-2113: state=Stopped err=<nil>
	W0602 11:17:55.149352   15352 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 11:17:55.192959   15352 out.go:177] * Restarting existing docker container for "embed-certs-20220602111648-2113" ...
	I0602 11:17:55.214224   15352 cli_runner.go:164] Run: docker start embed-certs-20220602111648-2113
	I0602 11:17:55.579016   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:17:55.651976   15352 kic.go:416] container "embed-certs-20220602111648-2113" state is running.
	I0602 11:17:55.652516   15352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220602111648-2113
	I0602 11:17:55.726686   15352 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/config.json ...
	I0602 11:17:55.727067   15352 machine.go:88] provisioning docker machine ...
	I0602 11:17:55.727092   15352 ubuntu.go:169] provisioning hostname "embed-certs-20220602111648-2113"
	I0602 11:17:55.727154   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:55.800251   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:55.800475   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:55.800489   15352 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220602111648-2113 && echo "embed-certs-20220602111648-2113" | sudo tee /etc/hostname
	I0602 11:17:55.940753   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220602111648-2113
	
	I0602 11:17:55.940849   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.013703   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.013881   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.013895   15352 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220602111648-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220602111648-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220602111648-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 11:17:56.130458   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:17:56.130490   15352 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 11:17:56.130508   15352 ubuntu.go:177] setting up certificates
	I0602 11:17:56.130518   15352 provision.go:83] configureAuth start
	I0602 11:17:56.130590   15352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220602111648-2113
	I0602 11:17:56.202522   15352 provision.go:138] copyHostCerts
	I0602 11:17:56.202610   15352 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 11:17:56.202620   15352 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 11:17:56.202707   15352 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 11:17:56.202956   15352 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 11:17:56.202966   15352 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 11:17:56.203024   15352 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 11:17:56.203210   15352 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 11:17:56.203230   15352 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 11:17:56.203292   15352 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 11:17:56.203402   15352 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220602111648-2113 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220602111648-2113]
	I0602 11:17:56.290352   15352 provision.go:172] copyRemoteCerts
	I0602 11:17:56.290417   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 11:17:56.290462   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.363098   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:56.448844   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0602 11:17:56.468413   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 11:17:56.487167   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0602 11:17:56.504244   15352 provision.go:86] duration metric: configureAuth took 373.70854ms
	I0602 11:17:56.504257   15352 ubuntu.go:193] setting minikube options for container-runtime
	I0602 11:17:56.504400   15352 config.go:178] Loaded profile config "embed-certs-20220602111648-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:17:56.504454   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.574726   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.574873   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.574883   15352 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 11:17:56.692552   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 11:17:56.692565   15352 ubuntu.go:71] root file system type: overlay
	I0602 11:17:56.692719   15352 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 11:17:56.692794   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.763208   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.763366   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.763424   15352 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 11:17:56.888442   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 11:17:56.888522   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.959173   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.959343   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.959378   15352 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 11:17:57.080070   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: 
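The command above follows a write-then-swap pattern: the candidate unit written to docker.service.new is diffed against the installed unit and only moved into place (followed by a daemon-reload and restart) when the two differ, so an unchanged daemon is never restarted. A generic sketch of the same idiom, with placeholder paths rather than minikube's:

# Install $NEW over $CUR only if the content differs; otherwise leave the
# running service untouched. Paths are illustrative placeholders.
NEW=/tmp/docker.service.new
CUR=/lib/systemd/system/docker.service
if ! sudo diff -u "$CUR" "$NEW"; then
  sudo mv "$NEW" "$CUR"
  sudo systemctl daemon-reload
  sudo systemctl restart docker
fi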
	I0602 11:17:57.080081   15352 machine.go:91] provisioned docker machine in 1.352983871s
	I0602 11:17:57.080092   15352 start.go:306] post-start starting for "embed-certs-20220602111648-2113" (driver="docker")
	I0602 11:17:57.080099   15352 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 11:17:57.080167   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 11:17:57.080224   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.150320   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.237169   15352 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 11:17:57.240932   15352 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 11:17:57.240947   15352 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 11:17:57.240960   15352 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 11:17:57.240965   15352 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 11:17:57.240973   15352 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 11:17:57.241075   15352 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 11:17:57.241205   15352 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 11:17:57.241347   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 11:17:57.249423   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:17:57.266686   15352 start.go:309] post-start completed in 186.579963ms
	I0602 11:17:57.266764   15352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 11:17:57.266809   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.337389   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.419423   15352 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 11:17:57.423756   15352 fix.go:57] fixHost completed within 2.341450978s
	I0602 11:17:57.423771   15352 start.go:81] releasing machines lock for "embed-certs-20220602111648-2113", held for 2.341488916s
	I0602 11:17:57.423846   15352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220602111648-2113
	I0602 11:17:57.493832   15352 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 11:17:57.493842   15352 ssh_runner.go:195] Run: systemctl --version
	I0602 11:17:57.493909   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.493898   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.571385   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.572948   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.784521   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 11:17:57.797372   15352 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:17:57.806989   15352 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 11:17:57.807041   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 11:17:57.816005   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 11:17:57.829060   15352 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 11:17:57.898903   15352 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 11:17:57.967953   15352 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:17:57.977779   15352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 11:17:58.050651   15352 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 11:17:58.060254   15352 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:17:58.095467   15352 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:17:58.172409   15352 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 11:17:58.172543   15352 cli_runner.go:164] Run: docker exec -t embed-certs-20220602111648-2113 dig +short host.docker.internal
	I0602 11:17:58.301503   15352 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 11:17:58.301604   15352 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 11:17:58.305905   15352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:17:58.316714   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:58.387831   15352 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:17:58.387911   15352 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:17:58.416852   15352 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0602 11:17:58.416866   15352 docker.go:541] Images already preloaded, skipping extraction
	I0602 11:17:58.416944   15352 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:17:58.447690   15352 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0602 11:17:58.447713   15352 cache_images.go:84] Images are preloaded, skipping loading
	I0602 11:17:58.447820   15352 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 11:17:58.520455   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:17:58.520468   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:17:58.520483   15352 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 11:17:58.520502   15352 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220602111648-2113 NodeName:embed-certs-20220602111648-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/v
ar/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 11:17:58.520613   15352 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20220602111648-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 11:17:58.520681   15352 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20220602111648-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 11:17:58.520742   15352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 11:17:58.528337   15352 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 11:17:58.528400   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 11:17:58.535248   15352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	I0602 11:17:58.547429   15352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 11:17:58.559653   15352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
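The kubeadm config printed earlier is staged as /var/tmp/minikube/kubeadm.yaml.new before the cluster restart. One way to sanity-check such a file by hand (a sketch only, not part of the logged run) is to diff it against kubeadm's own defaults for the same version:

# Compare a generated InitConfiguration/ClusterConfiguration against the
# defaults for this kubeadm version (run inside the node container).
kubeadm config print init-defaults > /tmp/kubeadm-defaults.yaml
diff -u /tmp/kubeadm-defaults.yaml /var/tmp/minikube/kubeadm.yaml.new || true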
	I0602 11:17:58.572912   15352 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 11:17:58.576677   15352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:17:58.585837   15352 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113 for IP: 192.168.58.2
	I0602 11:17:58.585959   15352 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 11:17:58.586013   15352 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 11:17:58.586093   15352 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/client.key
	I0602 11:17:58.586153   15352 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/apiserver.key.cee25041
	I0602 11:17:58.586215   15352 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/proxy-client.key
	I0602 11:17:58.586412   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 11:17:58.586453   15352 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 11:17:58.586477   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 11:17:58.586519   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 11:17:58.586551   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 11:17:58.586580   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 11:17:58.586639   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:17:58.587181   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 11:17:58.604132   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 11:17:58.620640   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 11:17:58.637561   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 11:17:58.654357   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 11:17:58.671422   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 11:17:58.687905   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 11:17:58.704559   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 11:17:58.721152   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 11:17:58.738095   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 11:17:58.754705   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 11:17:58.771067   15352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 11:17:58.783467   15352 ssh_runner.go:195] Run: openssl version
	I0602 11:17:58.788645   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 11:17:58.796302   15352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 11:17:58.800112   15352 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 11:17:58.800156   15352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 11:17:58.805418   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 11:17:58.812620   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 11:17:58.820133   15352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:17:58.824238   15352 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:17:58.824280   15352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:17:58.829346   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 11:17:58.836768   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 11:17:58.844364   15352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 11:17:58.848158   15352 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 11:17:58.848204   15352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 11:17:58.853444   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
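The three link commands above install each PEM under /etc/ssl/certs by its OpenSSL subject hash, which is how OpenSSL's default trust-store lookup finds a CA. The same technique in isolation (the certificate path is illustrative, not from the run above):

# Link a CA certificate into /etc/ssl/certs under its subject-hash name.
CERT=/usr/share/ca-certificates/example.pem   # illustrative path
HASH=$(openssl x509 -hash -noout -in "$CERT")
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"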
	I0602 11:17:58.860527   15352 kubeadm.go:395] StartCluster: {Name:embed-certs-20220602111648-2113 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:default APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedP
orts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:17:58.860620   15352 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:17:58.889454   15352 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 11:17:58.897140   15352 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 11:17:58.897153   15352 kubeadm.go:626] restartCluster start
	I0602 11:17:58.897196   15352 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 11:17:58.903854   15352 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:58.903907   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:58.974750   15352 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220602111648-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:17:58.975016   15352 kubeconfig.go:127] "embed-certs-20220602111648-2113" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig - will repair!
	I0602 11:17:58.975368   15352 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:17:58.976710   15352 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 11:17:58.984402   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:58.984445   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:58.992514   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.194646   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.194824   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.205800   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.394596   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.394711   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.404574   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.592620   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.592742   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.603566   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.792706   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.792789   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.801888   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.992644   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.992738   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.004887   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.194652   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.194785   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.205062   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.394638   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.394783   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.405305   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.593032   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.593156   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.602450   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.793140   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.793270   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.803822   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.992792   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.992919   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.003646   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.194714   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.194891   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.206158   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.393563   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.393610   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.402165   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.593865   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.593962   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.604645   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.794719   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.794882   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.806019   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.993241   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.993427   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:02.004637   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.004647   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:02.004690   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:02.012637   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.012650   15352 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0602 11:18:02.012657   15352 kubeadm.go:1092] stopping kube-system containers ...
	I0602 11:18:02.012720   15352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:18:02.043235   15352 docker.go:442] Stopping containers: [6b1ddf58ceb9 2443900e874e db141163e6d4 0356cd90224b f1d263c9b0f1 14883f2e0c47 2b1660b40df3 2259cd9108be 1277daa5a30b 8f0298e2ec89 9fa8e7282212 4f92dc954d61 bbe61b313255 6db85ab616c7 703d34253678 d3aedabaf004]
	I0602 11:18:02.043308   15352 ssh_runner.go:195] Run: docker stop 6b1ddf58ceb9 2443900e874e db141163e6d4 0356cd90224b f1d263c9b0f1 14883f2e0c47 2b1660b40df3 2259cd9108be 1277daa5a30b 8f0298e2ec89 9fa8e7282212 4f92dc954d61 bbe61b313255 6db85ab616c7 703d34253678 d3aedabaf004
	I0602 11:18:02.073833   15352 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 11:18:02.087788   15352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:18:02.095874   15352 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun  2 18:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun  2 18:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun  2 18:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  2 18:17 /etc/kubernetes/scheduler.conf
	
	I0602 11:18:02.095938   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0602 11:18:02.103319   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0602 11:18:02.110716   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0602 11:18:02.117486   15352 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.117534   15352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0602 11:18:02.124006   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0602 11:18:02.130595   15352 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.130640   15352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0602 11:18:02.137026   15352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:18:02.143920   15352 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 11:18:02.143937   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:02.186111   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:02.940146   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:03.065256   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:03.113758   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:03.165838   15352 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:18:03.165901   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:18:03.677915   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:18:04.176018   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:18:04.191155   15352 api_server.go:71] duration metric: took 1.025302471s to wait for apiserver process to appear ...
	I0602 11:18:04.191173   15352 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:18:04.191182   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:04.192377   15352 api_server.go:256] stopped: https://127.0.0.1:54894/healthz: Get "https://127.0.0.1:54894/healthz": EOF
	I0602 11:18:04.693127   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:07.094069   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0602 11:18:07.094108   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0602 11:18:07.193195   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:07.202009   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:18:07.202029   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:18:07.693364   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:07.700473   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:18:07.700494   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:18:08.192616   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:08.197675   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:18:08.197689   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:18:08.692589   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:08.697963   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 200:
	ok
	I0602 11:18:08.704402   15352 api_server.go:140] control plane version: v1.23.6
	I0602 11:18:08.704415   15352 api_server.go:130] duration metric: took 4.513159523s to wait for apiserver health ...
	I0602 11:18:08.704422   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:18:08.704427   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:18:08.704436   15352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:18:08.712420   15352 system_pods.go:59] 8 kube-system pods found
	I0602 11:18:08.712443   15352 system_pods.go:61] "coredns-64897985d-mqhps" [a9db0af0-c7e2-43f0-94d1-285cf82eefc6] Running
	I0602 11:18:08.712450   15352 system_pods.go:61] "etcd-embed-certs-20220602111648-2113" [655c91b8-a19a-4a3d-8fc4-4bb99628728c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 11:18:08.712457   15352 system_pods.go:61] "kube-apiserver-embed-certs-20220602111648-2113" [1c169e07-9698-455b-bc45-fb6268c818dd] Running
	I0602 11:18:08.712463   15352 system_pods.go:61] "kube-controller-manager-embed-certs-20220602111648-2113" [8dabcc9b-0bff-45c0-b617-b673244bb05e] Running
	I0602 11:18:08.712467   15352 system_pods.go:61] "kube-proxy-hxhmn" [0b00b834-77d9-498a-b6f4-73ada68667be] Running
	I0602 11:18:08.712471   15352 system_pods.go:61] "kube-scheduler-embed-certs-20220602111648-2113" [2d987b9c-0f04-4851-bdb4-d9d1eefcc598] Running
	I0602 11:18:08.712481   15352 system_pods.go:61] "metrics-server-b955d9d8-5k65t" [27770582-e78d-4495-83a5-a03c3c22b6ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:18:08.712489   15352 system_pods.go:61] "storage-provisioner" [971f85e7-9555-4ad3-aada-015be49207a6] Running
	I0602 11:18:08.712494   15352 system_pods.go:74] duration metric: took 8.053604ms to wait for pod list to return data ...
	I0602 11:18:08.712501   15352 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:18:08.718457   15352 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:18:08.718474   15352 node_conditions.go:123] node cpu capacity is 6
	I0602 11:18:08.718485   15352 node_conditions.go:105] duration metric: took 5.979977ms to run NodePressure ...
	I0602 11:18:08.718498   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:08.917133   15352 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0602 11:18:08.963399   15352 kubeadm.go:777] kubelet initialised
	I0602 11:18:08.963410   15352 kubeadm.go:778] duration metric: took 46.263216ms waiting for restarted kubelet to initialise ...
	I0602 11:18:08.963418   15352 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:18:08.968510   15352 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-mqhps" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:08.973930   15352 pod_ready.go:92] pod "coredns-64897985d-mqhps" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:08.973941   15352 pod_ready.go:81] duration metric: took 5.418497ms waiting for pod "coredns-64897985d-mqhps" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:08.973947   15352 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:10.987864   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:13.489319   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:15.984994   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:17.985135   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:20.487923   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:20.984961   15352 pod_ready.go:92] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:20.984975   15352 pod_ready.go:81] duration metric: took 12.010814852s waiting for pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:20.984981   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:22.996747   15352 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:23.497076   15352 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.497088   15352 pod_ready.go:81] duration metric: took 2.512058532s waiting for pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.497094   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.500990   15352 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.500999   15352 pod_ready.go:81] duration metric: took 3.899621ms waiting for pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.501005   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hxhmn" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.504762   15352 pod_ready.go:92] pod "kube-proxy-hxhmn" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.504770   15352 pod_ready.go:81] duration metric: took 3.760621ms waiting for pod "kube-proxy-hxhmn" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.504775   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.508796   15352 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.508803   15352 pod_ready.go:81] duration metric: took 4.023396ms waiting for pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.508810   15352 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:25.519475   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:28.019880   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:30.021312   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:32.520124   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:35.018464   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:37.019378   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:39.020228   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:41.520520   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:44.019685   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:46.021361   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:48.517860   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:50.519722   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:52.520558   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:55.021033   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:57.518515   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:59.520949   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:01.521775   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:04.020252   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:06.021659   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:08.522036   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:11.019578   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:13.021252   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:15.519890   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:17.522449   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:20.019069   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:22.022494   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:24.519019   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:26.520994   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:29.019342   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:31.021808   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:33.518558   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:35.522527   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:38.019317   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:40.021350   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:42.519178   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:44.522452   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:47.020277   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:49.020861   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:51.021940   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:53.522777   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:56.022962   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:58.023294   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:00.519960   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:02.521430   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:05.022687   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:07.522208   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:10.021463   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:12.519965   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:14.522183   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:17.021383   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:19.023054   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:21.520910   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:23.523643   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:26.021449   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:28.023761   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:30.522348   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:33.024537   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:35.523518   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:37.523926   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:40.023533   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:42.520330   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:44.521363   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:46.523702   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:49.021771   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:51.022021   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:53.022137   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:55.024682   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:57.522459   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:00.022039   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:02.022164   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:04.022963   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:06.023102   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:08.520914   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:10.522452   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:13.022353   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:15.024327   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:17.024604   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:19.024700   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:21.521873   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:24.026794   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:26.523991   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:29.022868   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:31.023261   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:33.023747   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:35.024513   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:37.522052   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:39.523349   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:41.523819   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:44.023580   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:46.524426   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:48.524790   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:51.025030   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:53.522632   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:55.523997   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:57.526073   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:00.025125   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:02.522387   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:04.525282   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:07.024864   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:09.523673   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:11.524761   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:13.525553   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:16.023071   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:18.023459   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:20.525701   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:23.023773   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:23.517112   15352 pod_ready.go:81] duration metric: took 4m0.004136963s waiting for pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace to be "Ready" ...
	E0602 11:22:23.517134   15352 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace to be "Ready" (will not retry!)
	I0602 11:22:23.517161   15352 pod_ready.go:38] duration metric: took 4m14.54933227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:22:23.517193   15352 kubeadm.go:630] restartCluster took 4m24.615456672s
	W0602 11:22:23.517311   15352 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0602 11:22:23.517339   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:23:01.958873   15352 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.440855806s)
	I0602 11:23:01.958935   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:23:01.968583   15352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:23:01.976178   15352 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:23:01.976221   15352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:23:01.983698   15352 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:23:01.983724   15352 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:23:02.466453   15352 out.go:204]   - Generating certificates and keys ...
	I0602 11:23:03.315809   15352 out.go:204]   - Booting up control plane ...
	I0602 11:23:09.371051   15352 out.go:204]   - Configuring RBAC rules ...
	I0602 11:23:09.860945   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:23:09.860961   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:23:09.860985   15352 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 11:23:09.861071   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:09.861074   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=embed-certs-20220602111648-2113 minikube.k8s.io/updated_at=2022_06_02T11_23_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:09.876275   15352 ops.go:34] apiserver oom_adj: -16
	I0602 11:23:10.001073   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:10.577726   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:11.076447   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:11.576911   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:12.076437   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:12.576329   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:13.076844   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:13.577067   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:14.078221   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:14.576893   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:15.076756   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:15.576823   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:16.077283   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:16.577898   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:17.078403   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:17.577411   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:18.077781   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:18.576420   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:19.076555   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:19.576528   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:20.076412   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:20.578098   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:21.077009   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:21.576491   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:22.076566   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:22.576477   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:23.076580   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:23.131668   15352 kubeadm.go:1045] duration metric: took 13.27043884s to wait for elevateKubeSystemPrivileges.
	I0602 11:23:23.131685   15352 kubeadm.go:397] StartCluster complete in 5m24.265555176s
	I0602 11:23:23.131703   15352 settings.go:142] acquiring lock: {Name:mka48fc2cc9e132f8df9370d54d7f09abdd5d2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:23:23.131777   15352 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:23:23.132516   15352 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:23:23.648470   15352 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220602111648-2113" rescaled to 1
	I0602 11:23:23.648513   15352 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 11:23:23.648518   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 11:23:23.648543   15352 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0602 11:23:23.648750   15352 config.go:178] Loaded profile config "embed-certs-20220602111648-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:23:23.688318   15352 out.go:177] * Verifying Kubernetes components...
	I0602 11:23:23.688398   15352 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220602111648-2113"
	I0602 11:23:23.688412   15352 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220602111648-2113"
	I0602 11:23:23.688416   15352 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220602111648-2113"
	I0602 11:23:23.747362   15352 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220602111648-2113"
	I0602 11:23:23.747382   15352 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220602111648-2113"
	W0602 11:23:23.747391   15352 addons.go:165] addon metrics-server should already be in state true
	I0602 11:23:23.688418   15352 addons.go:65] Setting dashboard=true in profile "embed-certs-20220602111648-2113"
	I0602 11:23:23.747430   15352 addons.go:153] Setting addon dashboard=true in "embed-certs-20220602111648-2113"
	I0602 11:23:23.747436   15352 host.go:66] Checking if "embed-certs-20220602111648-2113" exists ...
	I0602 11:23:23.747346   15352 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220602111648-2113"
	W0602 11:23:23.747460   15352 addons.go:165] addon storage-provisioner should already be in state true
	I0602 11:23:23.747347   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:23:23.747489   15352 host.go:66] Checking if "embed-certs-20220602111648-2113" exists ...
	W0602 11:23:23.747442   15352 addons.go:165] addon dashboard should already be in state true
	I0602 11:23:23.747558   15352 host.go:66] Checking if "embed-certs-20220602111648-2113" exists ...
	I0602 11:23:23.747757   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.747877   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.748464   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.748914   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.764413   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 11:23:23.774077   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:23.869691   15352 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220602111648-2113"
	W0602 11:23:23.930483   15352 addons.go:165] addon default-storageclass should already be in state true
	I0602 11:23:23.888755   15352 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0602 11:23:23.930518   15352 host.go:66] Checking if "embed-certs-20220602111648-2113" exists ...
	I0602 11:23:23.909477   15352 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0602 11:23:23.930420   15352 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 11:23:23.931179   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.963267   15352 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220602111648-2113" to be "Ready" ...
	I0602 11:23:23.988507   15352 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0602 11:23:24.030625   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0602 11:23:24.009644   15352 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:23:24.030682   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 11:23:24.030501   15352 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0602 11:23:24.030848   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:24.030873   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:24.052496   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0602 11:23:24.052546   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0602 11:23:24.053233   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:24.057714   15352 node_ready.go:49] node "embed-certs-20220602111648-2113" has status "Ready":"True"
	I0602 11:23:24.057730   15352 node_ready.go:38] duration metric: took 27.16478ms waiting for node "embed-certs-20220602111648-2113" to be "Ready" ...
	I0602 11:23:24.057737   15352 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:23:24.066339   15352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-ps5fw" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:24.074444   15352 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 11:23:24.074462   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 11:23:24.074538   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:24.146535   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:23:24.147249   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:23:24.153394   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:23:24.156726   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:23:24.239554   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0602 11:23:24.239569   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0602 11:23:24.241122   15352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:23:24.252061   15352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 11:23:24.253413   15352 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0602 11:23:24.253424   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0602 11:23:24.256097   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0602 11:23:24.256112   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0602 11:23:24.271732   15352 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0602 11:23:24.271747   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0602 11:23:24.344571   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0602 11:23:24.344589   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0602 11:23:24.351954   15352 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:23:24.351970   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0602 11:23:24.368890   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0602 11:23:24.368902   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0602 11:23:24.454221   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0602 11:23:24.454235   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0602 11:23:24.454670   15352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:23:24.471327   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0602 11:23:24.471339   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0602 11:23:24.485285   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0602 11:23:24.485299   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0602 11:23:24.548818   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0602 11:23:24.548835   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0602 11:23:24.644195   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:23:24.644208   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0602 11:23:24.675479   15352 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0602 11:23:24.679579   15352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:23:24.942852   15352 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220602111648-2113"
	I0602 11:23:25.565936   15352 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0602 11:23:25.587209   15352 addons.go:417] enableAddons completed in 1.938607775s
	I0602 11:23:26.083086   15352 pod_ready.go:102] pod "coredns-64897985d-ps5fw" in "kube-system" namespace has status "Ready":"False"
	I0602 11:23:26.585079   15352 pod_ready.go:92] pod "coredns-64897985d-ps5fw" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.585092   15352 pod_ready.go:81] duration metric: took 2.518690418s waiting for pod "coredns-64897985d-ps5fw" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.585099   15352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-zhfn8" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.589749   15352 pod_ready.go:92] pod "coredns-64897985d-zhfn8" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.589758   15352 pod_ready.go:81] duration metric: took 4.642896ms waiting for pod "coredns-64897985d-zhfn8" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.589768   15352 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.593913   15352 pod_ready.go:92] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.593921   15352 pod_ready.go:81] duration metric: took 4.149186ms waiting for pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.593929   15352 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.598343   15352 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.598352   15352 pod_ready.go:81] duration metric: took 4.418374ms waiting for pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.598358   15352 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.603237   15352 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.603246   15352 pod_ready.go:81] duration metric: took 4.883426ms waiting for pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.603253   15352 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gcmn9" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.983215   15352 pod_ready.go:92] pod "kube-proxy-gcmn9" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.983225   15352 pod_ready.go:81] duration metric: took 379.960719ms waiting for pod "kube-proxy-gcmn9" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.983235   15352 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:27.383538   15352 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:27.383549   15352 pod_ready.go:81] duration metric: took 400.30138ms waiting for pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:27.383554   15352 pod_ready.go:38] duration metric: took 3.325734057s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:23:27.383567   15352 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:23:27.383609   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:23:27.397670   15352 api_server.go:71] duration metric: took 3.749073932s to wait for apiserver process to appear ...
	I0602 11:23:27.397685   15352 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:23:27.397693   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:23:27.402642   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 200:
	ok
	I0602 11:23:27.403842   15352 api_server.go:140] control plane version: v1.23.6
	I0602 11:23:27.403853   15352 api_server.go:130] duration metric: took 6.160965ms to wait for apiserver health ...
	I0602 11:23:27.403858   15352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:23:27.585352   15352 system_pods.go:59] 9 kube-system pods found
	I0602 11:23:27.585372   15352 system_pods.go:61] "coredns-64897985d-ps5fw" [dca916a9-6a4a-407e-af4d-19f98f5aa6c4] Running
	I0602 11:23:27.585400   15352 system_pods.go:61] "coredns-64897985d-zhfn8" [c17ca662-7b52-40a8-b1b1-661983c183d4] Running
	I0602 11:23:27.585405   15352 system_pods.go:61] "etcd-embed-certs-20220602111648-2113" [729f1076-c1d1-40f2-8c74-0716513f8c59] Running
	I0602 11:23:27.585411   15352 system_pods.go:61] "kube-apiserver-embed-certs-20220602111648-2113" [0e9e3a9d-e57f-48f8-a66e-d51393f9e509] Running
	I0602 11:23:27.585416   15352 system_pods.go:61] "kube-controller-manager-embed-certs-20220602111648-2113" [f15aadc2-e920-484a-bb54-c1db87cf9b51] Running
	I0602 11:23:27.585423   15352 system_pods.go:61] "kube-proxy-gcmn9" [9f001538-3e2b-455a-999c-bbb8b7ce2082] Running
	I0602 11:23:27.585430   15352 system_pods.go:61] "kube-scheduler-embed-certs-20220602111648-2113" [7d78f2d1-2fd3-4d17-a604-123c557dc94b] Running
	I0602 11:23:27.585435   15352 system_pods.go:61] "metrics-server-b955d9d8-d6jzn" [2e3f5fb8-e6aa-41f3-a689-f4ebd249a466] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:23:27.585442   15352 system_pods.go:61] "storage-provisioner" [37849889-6793-4475-a0b1-28f0412b616e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 11:23:27.585450   15352 system_pods.go:74] duration metric: took 181.583327ms to wait for pod list to return data ...
	I0602 11:23:27.585460   15352 default_sa.go:34] waiting for default service account to be created ...
	I0602 11:23:27.780881   15352 default_sa.go:45] found service account: "default"
	I0602 11:23:27.780894   15352 default_sa.go:55] duration metric: took 195.425402ms for default service account to be created ...
	I0602 11:23:27.780901   15352 system_pods.go:116] waiting for k8s-apps to be running ...
	I0602 11:23:27.983547   15352 system_pods.go:86] 9 kube-system pods found
	I0602 11:23:27.983561   15352 system_pods.go:89] "coredns-64897985d-ps5fw" [dca916a9-6a4a-407e-af4d-19f98f5aa6c4] Running
	I0602 11:23:27.983566   15352 system_pods.go:89] "coredns-64897985d-zhfn8" [c17ca662-7b52-40a8-b1b1-661983c183d4] Running
	I0602 11:23:27.983569   15352 system_pods.go:89] "etcd-embed-certs-20220602111648-2113" [729f1076-c1d1-40f2-8c74-0716513f8c59] Running
	I0602 11:23:27.983582   15352 system_pods.go:89] "kube-apiserver-embed-certs-20220602111648-2113" [0e9e3a9d-e57f-48f8-a66e-d51393f9e509] Running
	I0602 11:23:27.983587   15352 system_pods.go:89] "kube-controller-manager-embed-certs-20220602111648-2113" [f15aadc2-e920-484a-bb54-c1db87cf9b51] Running
	I0602 11:23:27.983591   15352 system_pods.go:89] "kube-proxy-gcmn9" [9f001538-3e2b-455a-999c-bbb8b7ce2082] Running
	I0602 11:23:27.983597   15352 system_pods.go:89] "kube-scheduler-embed-certs-20220602111648-2113" [7d78f2d1-2fd3-4d17-a604-123c557dc94b] Running
	I0602 11:23:27.983604   15352 system_pods.go:89] "metrics-server-b955d9d8-d6jzn" [2e3f5fb8-e6aa-41f3-a689-f4ebd249a466] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:23:27.983611   15352 system_pods.go:89] "storage-provisioner" [37849889-6793-4475-a0b1-28f0412b616e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 11:23:27.983617   15352 system_pods.go:126] duration metric: took 202.708238ms to wait for k8s-apps to be running ...
	I0602 11:23:27.983624   15352 system_svc.go:44] waiting for kubelet service to be running ....
	I0602 11:23:27.983673   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:23:27.996729   15352 system_svc.go:56] duration metric: took 13.098129ms WaitForService to wait for kubelet.
	I0602 11:23:27.996748   15352 kubeadm.go:572] duration metric: took 4.348143989s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0602 11:23:27.996779   15352 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:23:28.181774   15352 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:23:28.181787   15352 node_conditions.go:123] node cpu capacity is 6
	I0602 11:23:28.181794   15352 node_conditions.go:105] duration metric: took 184.999302ms to run NodePressure ...
	I0602 11:23:28.181802   15352 start.go:213] waiting for startup goroutines ...
	I0602 11:23:28.214838   15352 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0602 11:23:28.236679   15352 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220602111648-2113" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 18:17:55 UTC, end at Thu 2022-06-02 18:24:33 UTC. --
	Jun 02 18:22:40 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:22:40.049314468Z" level=info msg="ignoring event" container=81b021e998cc6b5e80e09327a5dca58f3645dc50b68df3fd2db1ea08d075644b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:22:40 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:22:40.174549594Z" level=info msg="ignoring event" container=7485a32ee830437f4034b9a5841c31dc82d7f71ddf5c35aaa06a919026e06a1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:22:50 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:22:50.259994733Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=79fee564d7722ff5087c72648958e1864080015877a359418bb9d8c9020b4567
	Jun 02 18:22:50 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:22:50.289496287Z" level=info msg="ignoring event" container=79fee564d7722ff5087c72648958e1864080015877a359418bb9d8c9020b4567 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:22:50 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:22:50.403618462Z" level=info msg="ignoring event" container=94b7e00a9081ce5c3377a576ba713958be99d44c60e7db71eb41b852aa9b4446 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:00 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:00.469892549Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=52b0d2ee4254759b60d6721bf326faec9c50ab82f8a7fe842ea6790b7c2d91b1
	Jun 02 18:23:00 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:00.527240241Z" level=info msg="ignoring event" container=52b0d2ee4254759b60d6721bf326faec9c50ab82f8a7fe842ea6790b7c2d91b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:00 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:00.634369574Z" level=info msg="ignoring event" container=86200dafbd9cf358dd60a97f4c239f2681e51a7a638658f45c706c59d27724f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:00 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:00.736103019Z" level=info msg="ignoring event" container=c3f925b9e9e7f020fd42beea83f7c7cdf8f798b7a7dfcf5c65ae9f6dc9f6a571 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:00 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:00.835136162Z" level=info msg="ignoring event" container=8ea0dee2c1866f6c78e5cb45742b33e53ef491b28459ed76c2dd163340972f66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:00 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:00.966456948Z" level=info msg="ignoring event" container=cda458dd692b609a25a006deec1dbe258c6f9cd68f11da48016684ebc55836da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:25 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:25.746073975Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:23:25 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:25.746119539Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:23:25 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:25.747186190Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:23:27 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:27.034820255Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 02 18:23:27 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:27.263016186Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 02 18:23:29 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:29.085990553Z" level=info msg="ignoring event" container=0f8cdd07d435a722e7d8ef64ce7658592d603989924500e4a60d524fa406adcf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:29 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:29.250883510Z" level=info msg="ignoring event" container=2b0f4a27cc00293d7efbcb6fa58acc86538eb112c41cb408cdbf412a3d4f5ac6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:30 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:30.436850569Z" level=info msg="ignoring event" container=57887a908ac97354796360d9926013f2c4f4f04cb22971793db7434569d9e5ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:30 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:30.547965329Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 02 18:23:31 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:31.141596334Z" level=info msg="ignoring event" container=2de37547d73dee06293ff8604ac9806e88299e034162b0416d1da8d019557954 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:42 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:42.297489060Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:23:42 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:42.297535874Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:23:42 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:42.365599532Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:23:46 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:46.906289957Z" level=info msg="ignoring event" container=719c668db198a73cd1140dd0ba0b21c998dba8794cf40c0c36f4958e2301f21b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	719c668db198a       a90209bb39e3d                                                                                    47 seconds ago       Exited              dashboard-metrics-scraper   2                   50c0b7d606286
	86bec09400c89       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   51 seconds ago       Running             kubernetes-dashboard        0                   5b77ffce4ef9c
	904d4dbfb80e2       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   d7c87dc9440b3
	3cc96a0578309       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   05bcaf5db0a26
	d194d787baa35       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   47d796f4a407e
	bbe638877cc03       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   53141afdb0d3f
	6de502b252383       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   86e0ae1b95d2d
	8b190f5996d51       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   926a90d207bf2
	5ed407736824f       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   e688fa4f15aa7
	
	* 
	* ==> coredns [3cc96a057830] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220602111648-2113
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220602111648-2113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=embed-certs-20220602111648-2113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T11_23_09_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 18:23:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220602111648-2113
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 18:24:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 18:24:31 +0000   Thu, 02 Jun 2022 18:23:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 18:24:31 +0000   Thu, 02 Jun 2022 18:23:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 18:24:31 +0000   Thu, 02 Jun 2022 18:23:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 18:24:31 +0000   Thu, 02 Jun 2022 18:24:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20220602111648-2113
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                19795094-b018-4c71-93fc-a30c871a2c0a
	  Boot ID:                    a475dd08-72ba-4c6d-89c1-75a58adc3783
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-ps5fw                                     100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     71s
	  kube-system                 etcd-embed-certs-20220602111648-2113                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kube-apiserver-embed-certs-20220602111648-2113              250m (4%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-embed-certs-20220602111648-2113    200m (3%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-gcmn9                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-embed-certs-20220602111648-2113              100m (1%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 metrics-server-b955d9d8-d6jzn                               100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         69s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-hf9nn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kubernetes-dashboard        kubernetes-dashboard-cd7c84bfc-gg4gx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 70s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    90s (x4 over 90s)  kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s (x4 over 90s)  kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  90s (x4 over 90s)  kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasSufficientMemory
	  Normal  Starting                 84s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s                kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s                kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s                kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                73s                kubelet     Node embed-certs-20220602111648-2113 status is now: NodeReady
	  Normal  Starting                 2s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2s                 kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s                 kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s                 kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2s                 kubelet     Node embed-certs-20220602111648-2113 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2s                 kubelet     Node embed-certs-20220602111648-2113 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [6de502b25238] <==
	* {"level":"info","ts":"2022-06-02T18:23:04.784Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-02T18:23:04.784Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-02T18:23:04.784Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:23:04.784Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:23:04.977Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:23:04.977Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:23:04.977Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:23:04.977Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20220602111648-2113 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T18:23:04.977Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:23:04.978Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T18:23:04.978Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:23:04.979Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-02T18:23:04.979Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T18:23:04.979Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2022-06-02T18:23:28.344Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.691637ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T18:23:28.344Z","caller":"traceutil/trace.go:171","msg":"trace[1143572392] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:585; }","duration":"102.843372ms","start":"2022-06-02T18:23:28.241Z","end":"2022-06-02T18:23:28.344Z","steps":["trace[1143572392] 'range keys from in-memory index tree'  (duration: 102.640117ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T18:23:41.911Z","caller":"traceutil/trace.go:171","msg":"trace[1951724529] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"102.515767ms","start":"2022-06-02T18:23:41.808Z","end":"2022-06-02T18:23:41.911Z","steps":["trace[1951724529] 'process raft request'  (duration: 101.291674ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  18:24:34 up  1:12,  0 users,  load average: 0.52, 0.60, 0.87
	Linux embed-certs-20220602111648-2113 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [8b190f5996d5] <==
	* I0602 18:23:08.215088       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 18:23:08.240642       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 18:23:08.285171       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0602 18:23:08.288731       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0602 18:23:08.289452       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 18:23:08.292009       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0602 18:23:09.081645       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 18:23:09.586876       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 18:23:09.593677       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0602 18:23:09.602213       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0602 18:23:09.775161       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0602 18:23:21.988418       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0602 18:23:22.839571       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0602 18:23:23.565594       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0602 18:23:24.901439       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.99.192.133]
	I0602 18:23:25.493503       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.211.224]
	I0602 18:23:25.506582       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.100.42.109]
	W0602 18:23:25.766488       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 18:23:25.766609       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 18:23:25.766636       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0602 18:24:30.422135       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 18:24:30.422234       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 18:24:30.422241       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [bbe638877cc0] <==
	* I0602 18:23:24.783742       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0602 18:23:24.787727       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0602 18:23:24.791820       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0602 18:23:24.795292       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-d6jzn"
	I0602 18:23:25.405484       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0602 18:23:25.409809       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:23:25.413265       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0602 18:23:25.417074       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:23:25.417106       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0602 18:23:25.417541       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-cd7c84bfc to 1"
	I0602 18:23:25.420518       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:23:25.423825       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0602 18:23:25.423880       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:23:25.423841       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:23:25.430356       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:23:25.430539       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:23:25.433334       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:23:25.433383       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0602 18:23:25.462068       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-cd7c84bfc-gg4gx"
	I0602 18:23:25.465557       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-hf9nn"
	W0602 18:23:30.026329       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	E0602 18:23:52.033611       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0602 18:23:52.545393       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0602 18:24:30.635858       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0602 18:24:30.704345       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [d194d787baa3] <==
	* I0602 18:23:23.495206       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0602 18:23:23.495261       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0602 18:23:23.495305       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 18:23:23.517209       1 server_others.go:206] "Using iptables Proxier"
	I0602 18:23:23.517259       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 18:23:23.517267       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 18:23:23.517300       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 18:23:23.517632       1 server.go:656] "Version info" version="v1.23.6"
	I0602 18:23:23.561288       1 config.go:317] "Starting service config controller"
	I0602 18:23:23.561990       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 18:23:23.561482       1 config.go:226] "Starting endpoint slice config controller"
	I0602 18:23:23.562008       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 18:23:23.662567       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 18:23:23.662608       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [5ed407736824] <==
	* W0602 18:23:06.983759       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0602 18:23:06.983768       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0602 18:23:06.983851       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 18:23:06.983879       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 18:23:06.983982       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 18:23:06.984010       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 18:23:06.984371       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0602 18:23:06.984380       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0602 18:23:06.984429       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 18:23:06.984488       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 18:23:06.984631       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0602 18:23:06.984640       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 18:23:06.984670       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 18:23:06.984795       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0602 18:23:07.827418       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 18:23:07.827458       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 18:23:07.897392       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 18:23:07.897429       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 18:23:08.044637       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 18:23:08.044674       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 18:23:08.051488       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0602 18:23:08.051526       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0602 18:23:08.103626       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0602 18:23:08.103665       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0602 18:23:10.479922       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 18:17:55 UTC, end at Thu 2022-06-02 18:24:34 UTC. --
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.222890    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2e3f5fb8-e6aa-41f3-a689-f4ebd249a466-tmp-dir\") pod \"metrics-server-b955d9d8-d6jzn\" (UID: \"2e3f5fb8-e6aa-41f3-a689-f4ebd249a466\") " pod="kube-system/metrics-server-b955d9d8-d6jzn"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.222982    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8gwj\" (UniqueName: \"kubernetes.io/projected/434b02ce-7d60-4db2-979f-96ada075b5f6-kube-api-access-g8gwj\") pod \"dashboard-metrics-scraper-56974995fc-hf9nn\" (UID: \"434b02ce-7d60-4db2-979f-96ada075b5f6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-hf9nn"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223009    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9f001538-3e2b-455a-999c-bbb8b7ce2082-kube-proxy\") pod \"kube-proxy-gcmn9\" (UID: \"9f001538-3e2b-455a-999c-bbb8b7ce2082\") " pod="kube-system/kube-proxy-gcmn9"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223024    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dca916a9-6a4a-407e-af4d-19f98f5aa6c4-config-volume\") pod \"coredns-64897985d-ps5fw\" (UID: \"dca916a9-6a4a-407e-af4d-19f98f5aa6c4\") " pod="kube-system/coredns-64897985d-ps5fw"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223041    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfgw4\" (UniqueName: \"kubernetes.io/projected/2e3f5fb8-e6aa-41f3-a689-f4ebd249a466-kube-api-access-bfgw4\") pod \"metrics-server-b955d9d8-d6jzn\" (UID: \"2e3f5fb8-e6aa-41f3-a689-f4ebd249a466\") " pod="kube-system/metrics-server-b955d9d8-d6jzn"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223057    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/37849889-6793-4475-a0b1-28f0412b616e-tmp\") pod \"storage-provisioner\" (UID: \"37849889-6793-4475-a0b1-28f0412b616e\") " pod="kube-system/storage-provisioner"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223101    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkcbr\" (UniqueName: \"kubernetes.io/projected/a8426b52-aeeb-4e11-8366-7cbf31b79047-kube-api-access-lkcbr\") pod \"kubernetes-dashboard-cd7c84bfc-gg4gx\" (UID: \"a8426b52-aeeb-4e11-8366-7cbf31b79047\") " pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-gg4gx"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223118    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/434b02ce-7d60-4db2-979f-96ada075b5f6-tmp-volume\") pod \"dashboard-metrics-scraper-56974995fc-hf9nn\" (UID: \"434b02ce-7d60-4db2-979f-96ada075b5f6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-hf9nn"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223169    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f001538-3e2b-455a-999c-bbb8b7ce2082-xtables-lock\") pod \"kube-proxy-gcmn9\" (UID: \"9f001538-3e2b-455a-999c-bbb8b7ce2082\") " pod="kube-system/kube-proxy-gcmn9"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223235    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f001538-3e2b-455a-999c-bbb8b7ce2082-lib-modules\") pod \"kube-proxy-gcmn9\" (UID: \"9f001538-3e2b-455a-999c-bbb8b7ce2082\") " pod="kube-system/kube-proxy-gcmn9"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223257    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8bqf\" (UniqueName: \"kubernetes.io/projected/dca916a9-6a4a-407e-af4d-19f98f5aa6c4-kube-api-access-h8bqf\") pod \"coredns-64897985d-ps5fw\" (UID: \"dca916a9-6a4a-407e-af4d-19f98f5aa6c4\") " pod="kube-system/coredns-64897985d-ps5fw"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223286    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2lw2\" (UniqueName: \"kubernetes.io/projected/37849889-6793-4475-a0b1-28f0412b616e-kube-api-access-d2lw2\") pod \"storage-provisioner\" (UID: \"37849889-6793-4475-a0b1-28f0412b616e\") " pod="kube-system/storage-provisioner"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223309    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a8426b52-aeeb-4e11-8366-7cbf31b79047-tmp-volume\") pod \"kubernetes-dashboard-cd7c84bfc-gg4gx\" (UID: \"a8426b52-aeeb-4e11-8366-7cbf31b79047\") " pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-gg4gx"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223342    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6wpj\" (UniqueName: \"kubernetes.io/projected/9f001538-3e2b-455a-999c-bbb8b7ce2082-kube-api-access-n6wpj\") pod \"kube-proxy-gcmn9\" (UID: \"9f001538-3e2b-455a-999c-bbb8b7ce2082\") " pod="kube-system/kube-proxy-gcmn9"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223353    7185 reconciler.go:157] "Reconciler: start to sync state"
	Jun 02 18:24:33 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:33.397821    7185 request.go:665] Waited for 1.194873126s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 02 18:24:33 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:33.470016    7185 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220602111648-2113\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220602111648-2113"
	Jun 02 18:24:33 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:33.685527    7185 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220602111648-2113\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220602111648-2113"
	Jun 02 18:24:33 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:33.853439    7185 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220602111648-2113\" already exists" pod="kube-system/etcd-embed-certs-20220602111648-2113"
	Jun 02 18:24:34 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:34.001587    7185 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220602111648-2113\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220602111648-2113"
	Jun 02 18:24:34 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:34.328275    7185 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 02 18:24:34 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:34.328304    7185 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 02 18:24:34 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:34.328384    7185 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bfgw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHa
ndler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMess
agePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-d6jzn_kube-system(2e3f5fb8-e6aa-41f3-a689-f4ebd249a466): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 02 18:24:34 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:34.328409    7185 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-d6jzn" podUID=2e3f5fb8-e6aa-41f3-a689-f4ebd249a466
	Jun 02 18:24:34 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:34.601453    7185 scope.go:110] "RemoveContainer" containerID="719c668db198a73cd1140dd0ba0b21c998dba8794cf40c0c36f4958e2301f21b"
	
	* 
	* ==> kubernetes-dashboard [86bec09400c8] <==
	* 2022/06/02 18:23:42 Using namespace: kubernetes-dashboard
	2022/06/02 18:23:42 Using in-cluster config to connect to apiserver
	2022/06/02 18:23:42 Using secret token for csrf signing
	2022/06/02 18:23:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/02 18:23:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/02 18:23:42 Successful initial request to the apiserver, version: v1.23.6
	2022/06/02 18:23:42 Generating JWE encryption key
	2022/06/02 18:23:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/02 18:23:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/02 18:23:42 Initializing JWE encryption key from synchronized object
	2022/06/02 18:23:42 Creating in-cluster Sidecar client
	2022/06/02 18:23:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/02 18:23:42 Serving insecurely on HTTP port: 9090
	2022/06/02 18:24:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/02 18:23:42 Starting overwatch
	
	* 
	* ==> storage-provisioner [904d4dbfb80e] <==
	* I0602 18:23:25.938956       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 18:23:25.946968       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 18:23:25.947068       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 18:23:25.952642       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 18:23:25.952799       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220602111648-2113_635a4672-3cc6-4467-9beb-e2412c23cc74!
	I0602 18:23:25.952850       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"837d08af-65b6-4fa1-bdba-c1b746bcd758", APIVersion:"v1", ResourceVersion:"570", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220602111648-2113_635a4672-3cc6-4467-9beb-e2412c23cc74 became leader
	I0602 18:23:26.053131       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220602111648-2113_635a4672-3cc6-4467-9beb-e2412c23cc74!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220602111648-2113 -n embed-certs-20220602111648-2113
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220602111648-2113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-d6jzn
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220602111648-2113 describe pod metrics-server-b955d9d8-d6jzn
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220602111648-2113 describe pod metrics-server-b955d9d8-d6jzn: exit status 1 (321.961196ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-d6jzn" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220602111648-2113 describe pod metrics-server-b955d9d8-d6jzn: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220602111648-2113
helpers_test.go:235: (dbg) docker inspect embed-certs-20220602111648-2113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "82b9747ec857b93ad9d421afe7dfdd9bdf9506aef6ec3c3632152e4907e54cdc",
	        "Created": "2022-06-02T18:16:55.180494539Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 263156,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T18:17:55.590678725Z",
	            "FinishedAt": "2022-06-02T18:17:53.678894097Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/82b9747ec857b93ad9d421afe7dfdd9bdf9506aef6ec3c3632152e4907e54cdc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82b9747ec857b93ad9d421afe7dfdd9bdf9506aef6ec3c3632152e4907e54cdc/hostname",
	        "HostsPath": "/var/lib/docker/containers/82b9747ec857b93ad9d421afe7dfdd9bdf9506aef6ec3c3632152e4907e54cdc/hosts",
	        "LogPath": "/var/lib/docker/containers/82b9747ec857b93ad9d421afe7dfdd9bdf9506aef6ec3c3632152e4907e54cdc/82b9747ec857b93ad9d421afe7dfdd9bdf9506aef6ec3c3632152e4907e54cdc-json.log",
	        "Name": "/embed-certs-20220602111648-2113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220602111648-2113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220602111648-2113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/56c0a32a20b715226dc33286682566f4d50b9dbbd31b44aa22b7e47f39f8584e-init/diff:/var/lib/docker/overlay2/4dd335cb9793ead27105882a9b0cec3be858c11ad5caacc03a687414f6c0c659/diff:/var/lib/docker/overlay2/208c0db52d838ede59b38c1dfcd9869c8416b16d2b20ea18d0db9b56e68c6d8c/diff:/var/lib/docker/overlay2/aaf8a8f5c85270a99462f3864bf34a8ec2645724773bad697fc5ba1ac6727447/diff:/var/lib/docker/overlay2/92c4e6486e99c8dd04746740d3ea02da94dcea2781382127f34d776cfa9840e8/diff:/var/lib/docker/overlay2/a24935153f6f383a46b5fbdf2f1386f437557240473c1aea5ffb49825e122d5c/diff:/var/lib/docker/overlay2/bfac58d5f7c21d55277e22e8fe2c8361d0b42b6bc4f781d081f18506c696cbd5/diff:/var/lib/docker/overlay2/5436272aadac28e12f17d1950511088cbcbf1f121732bf67bc2b4f8bd061220e/diff:/var/lib/docker/overlay2/5e6fbb75323de9a4ebe4c26de164ba9f90e6b97a9464ae908ab8ccaa8af935a0/diff:/var/lib/docker/overlay2/9c4318b0f0aaa4384a765d2577b339424213c510ca7db4ca46d652065315fd42/diff:/var/lib/docker/overlay2/44a076
f840788b1d4cdf51e6cfa981c28e7f691ae02ca0bc198afce5b00335dd/diff:/var/lib/docker/overlay2/e00db7f66bb6cb1dd1cc97f258fea69bcfeb57eaf41f341510452732089a149c/diff:/var/lib/docker/overlay2/621ae16facab19ab30885a152e88b1331c8f767e00bfc66bba2ca3646b8848ed/diff:/var/lib/docker/overlay2/049d26daf267a8697501b45a3dc7a811f1e14cf9aac5a7954be8104dce849190/diff:/var/lib/docker/overlay2/b767958f319e787669ca25b03021756f2c0e799de75405dac116015d98cb4a05/diff:/var/lib/docker/overlay2/aa5a7b8aba1489f7637e9289e5976c3c2032670a220c77b848bae54162a48ab5/diff:/var/lib/docker/overlay2/9bf0308979693ad8ec467df0960ab7dfe4bb371271ccfc062749a559afdca0ca/diff:/var/lib/docker/overlay2/d9871cf29c5aa8c83ab462cc8a7ae8b640cb879c166a5340bc5589182c692d6c/diff:/var/lib/docker/overlay2/d1ba5717745cdc1ac785264731dcd1598f2b196430fd2be8547ba3e50442940b/diff:/var/lib/docker/overlay2/7983b4fa120a8708510aaec4a8ad6b5089e2801c37e77fa6a2184f32c793e728/diff:/var/lib/docker/overlay2/e0bb0ad6032280e9bff8c706336d61df9ba99527201708fbc53e5c9aacd500d2/diff:/var/lib/d
ocker/overlay2/842231e7ba6a5edc281dbd9ea3dfd4cc27e965aff29e690744d31381e9a71afa/diff:/var/lib/docker/overlay2/b276fe80b6a5fbc6c5c9de02831f6c5f2fbd6f99da192a7a3a2f4d154cc44e97/diff:/var/lib/docker/overlay2/014aa21763c8dccb55dd250c4d8b33f0acaee666211ead19cb6e5e28e9bc8714/diff:/var/lib/docker/overlay2/f7dddd0317e202dc9d3ca53f666678345918d26c680496881c12003c632b717e/diff:/var/lib/docker/overlay2/dbe6fb5e3e2176459f26f3be087ccb3bbf7b9f3dd8212f109cbd40db13920e61/diff:/var/lib/docker/overlay2/991e50fb7f577e1ddfa43b71c3336d9b3030af2bf50d778fa03f523d50326a26/diff:/var/lib/docker/overlay2/340a74d3ac0058298e108bb3badbdf8f9c03d12f33a8f35ace6f2dafbfef6e1b/diff:/var/lib/docker/overlay2/1ec45c8b805fa2d9ae2a78232451a8a9f7890572b65b93c3cc2f8cc97bb468b3/diff:/var/lib/docker/overlay2/a4bdf469875625a4819ef172238245456c4fbdff8d53d2e4b10c1e186b87c7e3/diff:/var/lib/docker/overlay2/971a6afffbae7a0960e3cec75ef8bf5bdeeaf93eed0625ce03d41997a1b3adf6/diff:/var/lib/docker/overlay2/41debf1920c66a8d299a760a9542d53a8f225ee5ac130b3ac7bbffb5009
7d8d5/diff:/var/lib/docker/overlay2/f35ffb9e867d47d1ccec9ff00f20991ff977a94e6bac0a2616ea9167f3577b29/diff:/var/lib/docker/overlay2/ecdbcd5cc7a31638f8aa79589398e0cf24199dc41b89b5f31b1317c3fd54820b/diff:/var/lib/docker/overlay2/b66e4f99691657f24a54217d3c53ad994286af23e381935732b9c3f2d21f4a44/diff:/var/lib/docker/overlay2/ec5368fd95421da6dabd09af51a761c3235ecc971aca85e8ddaaf02df2d11c79/diff:/var/lib/docker/overlay2/93178712be4ea745873bf53ef4ef2b20986cd1279859a0eacbed679e51311319/diff:/var/lib/docker/overlay2/e33f9b16e3c7d44079562141307279c286bd308d341351990313fa5012f277be/diff:/var/lib/docker/overlay2/8c433930f49d5c9feb22ddb9ced5b25cbb0a4e69904034409467c13f88e2c022/diff:/var/lib/docker/overlay2/cd43f3c8f5a0f533414220f90bc387d734a11743cd1bd8c1be179bf039ae713a/diff:/var/lib/docker/overlay2/700358b38076f573c0b16cdffa046181ab1220d64f5b2392183b17a048a9d77b/diff:/var/lib/docker/overlay2/4e44a564d5dc52a3da062061a9f27f56a5ed8dd4f266b30b4b09c9cc0aeb1e1e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56c0a32a20b715226dc33286682566f4d50b9dbbd31b44aa22b7e47f39f8584e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56c0a32a20b715226dc33286682566f4d50b9dbbd31b44aa22b7e47f39f8584e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56c0a32a20b715226dc33286682566f4d50b9dbbd31b44aa22b7e47f39f8584e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220602111648-2113",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220602111648-2113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220602111648-2113",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220602111648-2113",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220602111648-2113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c13705557d1f6cadd9af527c7a6ad6f4165ee0fc8b7c3fb7ca9a32dc1edfd3c1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54890"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54891"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54893"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54894"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c13705557d1f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220602111648-2113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "82b9747ec857",
	                        "embed-certs-20220602111648-2113"
	                    ],
	                    "NetworkID": "7fc7fa81ba697d96b69d01d51b7eeadbfdb988accd570d0531903136042ab048",
	                    "EndpointID": "ebabd5ebeebd48a69586e249f7afcb29191d268e378157ab0064386cc61033d0",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220602111648-2113 -n embed-certs-20220602111648-2113

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220602111648-2113 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220602111648-2113 logs -n 25: (2.827901562s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| logs    | default-k8s-different-port-20220602110711-2113             | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220602110711-2113 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:14 PDT |
	|         | default-k8s-different-port-20220602110711-2113             |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220602111446-2113 --memory=2200            | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:14 PDT | 02 Jun 22 11:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220602111446-2113 --memory=2200            | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:15 PDT | 02 Jun 22 11:15 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220602111446-2113                             | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220602111446-2113                             | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220602111446-2113                 | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:16 PDT |
	|         | newest-cni-20220602111446-2113                             |                                                |         |                |                     |                     |
	| start   | -p                                                         | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:16 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                |         |                |                     |                     |
	|         | --driver=docker                                            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:17 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220602105906-2113                        | old-k8s-version-20220602105906-2113            | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:22 PDT | 02 Jun 22 11:22 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p                                                         | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:17 PDT | 02 Jun 22 11:23 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                |         |                |                     |                     |
	|         | --driver=docker                                            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:23 PDT | 02 Jun 22 11:23 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:23 PDT | 02 Jun 22 11:23 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:24 PDT | 02 Jun 22 11:24 PDT |
	|         | embed-certs-20220602111648-2113                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220602111648-2113                            | embed-certs-20220602111648-2113                | jenkins | v1.26.0-beta.1 | 02 Jun 22 11:24 PDT | 02 Jun 22 11:24 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 11:17:54
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 11:17:54.298706   15352 out.go:296] Setting OutFile to fd 1 ...
	I0602 11:17:54.298896   15352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:17:54.298901   15352 out.go:309] Setting ErrFile to fd 2...
	I0602 11:17:54.298905   15352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 11:17:54.299002   15352 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 11:17:54.299282   15352 out.go:303] Setting JSON to false
	I0602 11:17:54.314716   15352 start.go:115] hostinfo: {"hostname":"37309.local","uptime":4643,"bootTime":1654189231,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 11:17:54.314829   15352 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 11:17:54.336522   15352 out.go:177] * [embed-certs-20220602111648-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 11:17:54.379858   15352 notify.go:193] Checking for updates...
	I0602 11:17:54.401338   15352 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 11:17:54.422430   15352 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:17:54.443822   15352 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 11:17:54.465706   15352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 11:17:54.487842   15352 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 11:17:54.510345   15352 config.go:178] Loaded profile config "embed-certs-20220602111648-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:17:54.511006   15352 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 11:17:54.583879   15352 docker.go:137] docker version: linux-20.10.14
	I0602 11:17:54.584008   15352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:17:54.710496   15352 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:17:54.661726472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:17:54.732441   15352 out.go:177] * Using the docker driver based on existing profile
	I0602 11:17:54.754261   15352 start.go:284] selected driver: docker
	I0602 11:17:54.754294   15352 start.go:806] validating driver "docker" against &{Name:embed-certs-20220602111648-2113 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Schedule
dStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:17:54.754438   15352 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 11:17:54.757822   15352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 11:17:54.886547   15352 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:54 SystemTime:2022-06-02 18:17:54.836693909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 11:17:54.886708   15352 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 11:17:54.886725   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:17:54.886733   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:17:54.886755   15352 start_flags.go:306] config:
	{Name:embed-certs-20220602111648-2113 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:17:54.930397   15352 out.go:177] * Starting control plane node embed-certs-20220602111648-2113 in cluster embed-certs-20220602111648-2113
	I0602 11:17:54.952534   15352 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 11:17:54.974462   15352 out.go:177] * Pulling base image ...
	I0602 11:17:55.016639   15352 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:17:55.016641   15352 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 11:17:55.016722   15352 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 11:17:55.016736   15352 cache.go:57] Caching tarball of preloaded images
	I0602 11:17:55.016927   15352 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 11:17:55.016959   15352 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
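For reference, the preload check logged above reduces to a file-existence test against the per-version tarball in the cache directory; a minimal shell sketch of that check (not minikube's actual code), assuming MINIKUBE_HOME points at the .minikube directory shown in the paths above:

	# Hypothetical illustration of the preload existence check.
	K8S_VERSION=v1.23.6
	PRELOAD="$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-${K8S_VERSION}-docker-overlay2-amd64.tar.lz4"
	if [ -f "$PRELOAD" ]; then
		echo "found local preload: $PRELOAD (skipping download)"
	else
		echo "preload missing: $PRELOAD" >&2
	fi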
	I0602 11:17:55.017969   15352 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/config.json ...
	I0602 11:17:55.082071   15352 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 11:17:55.082088   15352 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 11:17:55.082098   15352 cache.go:206] Successfully downloaded all kic artifacts
	I0602 11:17:55.082139   15352 start.go:352] acquiring machines lock for embed-certs-20220602111648-2113: {Name:mk14ff68897b305c2bdfb36f1ceaa58ce32379a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 11:17:55.082233   15352 start.go:356] acquired machines lock for "embed-certs-20220602111648-2113" in 73.195µs
	I0602 11:17:55.082254   15352 start.go:94] Skipping create...Using existing machine configuration
	I0602 11:17:55.082263   15352 fix.go:55] fixHost starting: 
	I0602 11:17:55.082507   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:17:55.149317   15352 fix.go:103] recreateIfNeeded on embed-certs-20220602111648-2113: state=Stopped err=<nil>
	W0602 11:17:55.149352   15352 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 11:17:55.192959   15352 out.go:177] * Restarting existing docker container for "embed-certs-20220602111648-2113" ...
	I0602 11:17:55.214224   15352 cli_runner.go:164] Run: docker start embed-certs-20220602111648-2113
	I0602 11:17:55.579016   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:17:55.651976   15352 kic.go:416] container "embed-certs-20220602111648-2113" state is running.
	I0602 11:17:55.652516   15352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220602111648-2113
	I0602 11:17:55.726686   15352 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/config.json ...
	I0602 11:17:55.727067   15352 machine.go:88] provisioning docker machine ...
	I0602 11:17:55.727092   15352 ubuntu.go:169] provisioning hostname "embed-certs-20220602111648-2113"
	I0602 11:17:55.727154   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:55.800251   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:55.800475   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:55.800489   15352 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220602111648-2113 && echo "embed-certs-20220602111648-2113" | sudo tee /etc/hostname
	I0602 11:17:55.940753   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220602111648-2113
	
	I0602 11:17:55.940849   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.013703   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.013881   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.013895   15352 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220602111648-2113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220602111648-2113/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220602111648-2113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 11:17:56.130458   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:17:56.130490   15352 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 11:17:56.130508   15352 ubuntu.go:177] setting up certificates
	I0602 11:17:56.130518   15352 provision.go:83] configureAuth start
	I0602 11:17:56.130590   15352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220602111648-2113
	I0602 11:17:56.202522   15352 provision.go:138] copyHostCerts
	I0602 11:17:56.202610   15352 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 11:17:56.202620   15352 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 11:17:56.202707   15352 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 11:17:56.202956   15352 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 11:17:56.202966   15352 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 11:17:56.203024   15352 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 11:17:56.203210   15352 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 11:17:56.203230   15352 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 11:17:56.203292   15352 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1675 bytes)
	I0602 11:17:56.203402   15352 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220602111648-2113 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220602111648-2113]
	I0602 11:17:56.290352   15352 provision.go:172] copyRemoteCerts
	I0602 11:17:56.290417   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 11:17:56.290462   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.363098   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:56.448844   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0602 11:17:56.468413   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 11:17:56.487167   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0602 11:17:56.504244   15352 provision.go:86] duration metric: configureAuth took 373.70854ms
	I0602 11:17:56.504257   15352 ubuntu.go:193] setting minikube options for container-runtime
	I0602 11:17:56.504400   15352 config.go:178] Loaded profile config "embed-certs-20220602111648-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:17:56.504454   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.574726   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.574873   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.574883   15352 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 11:17:56.692552   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 11:17:56.692565   15352 ubuntu.go:71] root file system type: overlay
	I0602 11:17:56.692719   15352 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 11:17:56.692794   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.763208   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.763366   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.763424   15352 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 11:17:56.888442   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 11:17:56.888522   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:56.959173   15352 main.go:134] libmachine: Using SSH client type: native
	I0602 11:17:56.959343   15352 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54890 <nil> <nil>}
	I0602 11:17:56.959378   15352 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 11:17:57.080070   15352 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 11:17:57.080081   15352 machine.go:91] provisioned docker machine in 1.352983871s
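The docker.service update above follows a write-then-swap idiom: the regenerated unit is written to docker.service.new, compared against the installed unit, and only moved into place (followed by daemon-reload, enable, and restart) when the two differ. Restated as a standalone sketch of that pattern:

	# Sketch of the compare-and-swap unit update performed over SSH above.
	UNIT=/lib/systemd/system/docker.service
	if ! sudo diff -u "$UNIT" "$UNIT.new"; then
		sudo mv "$UNIT.new" "$UNIT"
		sudo systemctl -f daemon-reload
		sudo systemctl -f enable docker
		sudo systemctl -f restart docker
	fi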
	I0602 11:17:57.080092   15352 start.go:306] post-start starting for "embed-certs-20220602111648-2113" (driver="docker")
	I0602 11:17:57.080099   15352 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 11:17:57.080167   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 11:17:57.080224   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.150320   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.237169   15352 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 11:17:57.240932   15352 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 11:17:57.240947   15352 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 11:17:57.240960   15352 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 11:17:57.240965   15352 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 11:17:57.240973   15352 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 11:17:57.241075   15352 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 11:17:57.241205   15352 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem -> 21132.pem in /etc/ssl/certs
	I0602 11:17:57.241347   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 11:17:57.249423   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:17:57.266686   15352 start.go:309] post-start completed in 186.579963ms
	I0602 11:17:57.266764   15352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 11:17:57.266809   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.337389   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.419423   15352 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 11:17:57.423756   15352 fix.go:57] fixHost completed within 2.341450978s
	I0602 11:17:57.423771   15352 start.go:81] releasing machines lock for "embed-certs-20220602111648-2113", held for 2.341488916s
	I0602 11:17:57.423846   15352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220602111648-2113
	I0602 11:17:57.493832   15352 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 11:17:57.493842   15352 ssh_runner.go:195] Run: systemctl --version
	I0602 11:17:57.493909   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.493898   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:57.571385   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.572948   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:17:57.784521   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 11:17:57.797372   15352 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:17:57.806989   15352 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 11:17:57.807041   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 11:17:57.816005   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 11:17:57.829060   15352 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 11:17:57.898903   15352 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 11:17:57.967953   15352 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 11:17:57.977779   15352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 11:17:58.050651   15352 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 11:17:58.060254   15352 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:17:58.095467   15352 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 11:17:58.172409   15352 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 11:17:58.172543   15352 cli_runner.go:164] Run: docker exec -t embed-certs-20220602111648-2113 dig +short host.docker.internal
	I0602 11:17:58.301503   15352 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 11:17:58.301604   15352 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 11:17:58.305905   15352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
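The host.minikube.internal mapping is kept idempotent with a filter-and-append rewrite of /etc/hosts: any existing line for the name is dropped, the fresh IP/name pair is appended, and the temporary file is copied back over /etc/hosts. The same idiom, spelled out:

	# Idempotently (re)write a single /etc/hosts entry, mirroring the one-liner above.
	IP=192.168.65.2
	NAME=host.minikube.internal
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts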
	I0602 11:17:58.316714   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:58.387831   15352 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 11:17:58.387911   15352 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:17:58.416852   15352 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0602 11:17:58.416866   15352 docker.go:541] Images already preloaded, skipping extraction
	I0602 11:17:58.416944   15352 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 11:17:58.447690   15352 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0602 11:17:58.447713   15352 cache_images.go:84] Images are preloaded, skipping loading
	I0602 11:17:58.447820   15352 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 11:17:58.520455   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:17:58.520468   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:17:58.520483   15352 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 11:17:58.520502   15352 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220602111648-2113 NodeName:embed-certs-20220602111648-2113 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 11:17:58.520613   15352 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20220602111648-2113"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 11:17:58.520681   15352 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20220602111648-2113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 11:17:58.520742   15352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 11:17:58.528337   15352 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 11:17:58.528400   15352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 11:17:58.535248   15352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	I0602 11:17:58.547429   15352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 11:17:58.559653   15352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
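The kubeadm configuration rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration documents) is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. For orientation only: a file of this shape is the kind of input kubeadm accepts through its --config flag. The log does not show this exact command being run; it is given purely to indicate how such a config is normally consumed:

	# Illustrative only; not taken from the log above.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml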
	I0602 11:17:58.572912   15352 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 11:17:58.576677   15352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 11:17:58.585837   15352 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113 for IP: 192.168.58.2
	I0602 11:17:58.585959   15352 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 11:17:58.586013   15352 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 11:17:58.586093   15352 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/client.key
	I0602 11:17:58.586153   15352 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/apiserver.key.cee25041
	I0602 11:17:58.586215   15352 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/proxy-client.key
	I0602 11:17:58.586412   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem (1338 bytes)
	W0602 11:17:58.586453   15352 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113_empty.pem, impossibly tiny 0 bytes
	I0602 11:17:58.586477   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 11:17:58.586519   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 11:17:58.586551   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 11:17:58.586580   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1675 bytes)
	I0602 11:17:58.586639   15352 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem (1708 bytes)
	I0602 11:17:58.587181   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 11:17:58.604132   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 11:17:58.620640   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 11:17:58.637561   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/embed-certs-20220602111648-2113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 11:17:58.654357   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 11:17:58.671422   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0602 11:17:58.687905   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 11:17:58.704559   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0602 11:17:58.721152   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/2113.pem --> /usr/share/ca-certificates/2113.pem (1338 bytes)
	I0602 11:17:58.738095   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/21132.pem --> /usr/share/ca-certificates/21132.pem (1708 bytes)
	I0602 11:17:58.754705   15352 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 11:17:58.771067   15352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 11:17:58.783467   15352 ssh_runner.go:195] Run: openssl version
	I0602 11:17:58.788645   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21132.pem && ln -fs /usr/share/ca-certificates/21132.pem /etc/ssl/certs/21132.pem"
	I0602 11:17:58.796302   15352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21132.pem
	I0602 11:17:58.800112   15352 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:16 /usr/share/ca-certificates/21132.pem
	I0602 11:17:58.800156   15352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21132.pem
	I0602 11:17:58.805418   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21132.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 11:17:58.812620   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 11:17:58.820133   15352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:17:58.824238   15352 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:17:58.824280   15352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 11:17:58.829346   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 11:17:58.836768   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2113.pem && ln -fs /usr/share/ca-certificates/2113.pem /etc/ssl/certs/2113.pem"
	I0602 11:17:58.844364   15352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2113.pem
	I0602 11:17:58.848158   15352 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:16 /usr/share/ca-certificates/2113.pem
	I0602 11:17:58.848204   15352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2113.pem
	I0602 11:17:58.853444   15352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2113.pem /etc/ssl/certs/51391683.0"
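The certificate steps above install each PEM under /usr/share/ca-certificates and then create symlinks in /etc/ssl/certs named after the certificate's OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL's CApath lookup locates trusted certs. A sketch of the same convention:

	# Install a certificate so OpenSSL-based clients can find it by subject hash.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"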
	I0602 11:17:58.860527   15352 kubeadm.go:395] StartCluster: {Name:embed-certs-20220602111648-2113 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220602111648-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 11:17:58.860620   15352 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:17:58.889454   15352 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 11:17:58.897140   15352 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 11:17:58.897153   15352 kubeadm.go:626] restartCluster start
	I0602 11:17:58.897196   15352 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 11:17:58.903854   15352 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:58.903907   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:17:58.974750   15352 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220602111648-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:17:58.975016   15352 kubeconfig.go:127] "embed-certs-20220602111648-2113" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig - will repair!
	I0602 11:17:58.975368   15352 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:17:58.976710   15352 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 11:17:58.984402   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:58.984445   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:58.992514   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.194646   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.194824   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.205800   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.394596   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.394711   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.404574   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.592620   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.592742   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.603566   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.792706   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.792789   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:17:59.801888   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:17:59.992644   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:17:59.992738   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.004887   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.194652   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.194785   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.205062   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.394638   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.394783   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.405305   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.593032   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.593156   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.602450   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.793140   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.793270   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:00.803822   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:00.992792   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:00.992919   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.003646   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.194714   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.194891   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.206158   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.393563   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.393610   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.402165   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.593865   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.593962   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.604645   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.794719   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.794882   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:01.806019   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:01.993241   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:01.993427   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:02.004637   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.004647   15352 api_server.go:165] Checking apiserver status ...
	I0602 11:18:02.004690   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 11:18:02.012637   15352 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.012650   15352 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0602 11:18:02.012657   15352 kubeadm.go:1092] stopping kube-system containers ...
	I0602 11:18:02.012720   15352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 11:18:02.043235   15352 docker.go:442] Stopping containers: [6b1ddf58ceb9 2443900e874e db141163e6d4 0356cd90224b f1d263c9b0f1 14883f2e0c47 2b1660b40df3 2259cd9108be 1277daa5a30b 8f0298e2ec89 9fa8e7282212 4f92dc954d61 bbe61b313255 6db85ab616c7 703d34253678 d3aedabaf004]
	I0602 11:18:02.043308   15352 ssh_runner.go:195] Run: docker stop 6b1ddf58ceb9 2443900e874e db141163e6d4 0356cd90224b f1d263c9b0f1 14883f2e0c47 2b1660b40df3 2259cd9108be 1277daa5a30b 8f0298e2ec89 9fa8e7282212 4f92dc954d61 bbe61b313255 6db85ab616c7 703d34253678 d3aedabaf004
	I0602 11:18:02.073833   15352 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 11:18:02.087788   15352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:18:02.095874   15352 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun  2 18:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun  2 18:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun  2 18:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  2 18:17 /etc/kubernetes/scheduler.conf
	
	I0602 11:18:02.095938   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0602 11:18:02.103319   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0602 11:18:02.110716   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0602 11:18:02.117486   15352 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.117534   15352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0602 11:18:02.124006   15352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0602 11:18:02.130595   15352 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 11:18:02.130640   15352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0602 11:18:02.137026   15352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:18:02.143920   15352 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 11:18:02.143937   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:02.186111   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:02.940146   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:03.065256   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:03.113758   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:03.165838   15352 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:18:03.165901   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:18:03.677915   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:18:04.176018   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:18:04.191155   15352 api_server.go:71] duration metric: took 1.025302471s to wait for apiserver process to appear ...
	I0602 11:18:04.191173   15352 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:18:04.191182   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:04.192377   15352 api_server.go:256] stopped: https://127.0.0.1:54894/healthz: Get "https://127.0.0.1:54894/healthz": EOF
	I0602 11:18:04.693127   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:07.094069   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0602 11:18:07.094108   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0602 11:18:07.193195   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:07.202009   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:18:07.202029   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:18:07.693364   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:07.700473   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:18:07.700494   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:18:08.192616   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:08.197675   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 11:18:08.197689   15352 api_server.go:102] status: https://127.0.0.1:54894/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 11:18:08.692589   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:18:08.697963   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 200:
	ok
	I0602 11:18:08.704402   15352 api_server.go:140] control plane version: v1.23.6
	I0602 11:18:08.704415   15352 api_server.go:130] duration metric: took 4.513159523s to wait for apiserver health ...
	I0602 11:18:08.704422   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:18:08.704427   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:18:08.704436   15352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:18:08.712420   15352 system_pods.go:59] 8 kube-system pods found
	I0602 11:18:08.712443   15352 system_pods.go:61] "coredns-64897985d-mqhps" [a9db0af0-c7e2-43f0-94d1-285cf82eefc6] Running
	I0602 11:18:08.712450   15352 system_pods.go:61] "etcd-embed-certs-20220602111648-2113" [655c91b8-a19a-4a3d-8fc4-4bb99628728c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 11:18:08.712457   15352 system_pods.go:61] "kube-apiserver-embed-certs-20220602111648-2113" [1c169e07-9698-455b-bc45-fb6268c818dd] Running
	I0602 11:18:08.712463   15352 system_pods.go:61] "kube-controller-manager-embed-certs-20220602111648-2113" [8dabcc9b-0bff-45c0-b617-b673244bb05e] Running
	I0602 11:18:08.712467   15352 system_pods.go:61] "kube-proxy-hxhmn" [0b00b834-77d9-498a-b6f4-73ada68667be] Running
	I0602 11:18:08.712471   15352 system_pods.go:61] "kube-scheduler-embed-certs-20220602111648-2113" [2d987b9c-0f04-4851-bdb4-d9d1eefcc598] Running
	I0602 11:18:08.712481   15352 system_pods.go:61] "metrics-server-b955d9d8-5k65t" [27770582-e78d-4495-83a5-a03c3c22b6ed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:18:08.712489   15352 system_pods.go:61] "storage-provisioner" [971f85e7-9555-4ad3-aada-015be49207a6] Running
	I0602 11:18:08.712494   15352 system_pods.go:74] duration metric: took 8.053604ms to wait for pod list to return data ...
	I0602 11:18:08.712501   15352 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:18:08.718457   15352 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:18:08.718474   15352 node_conditions.go:123] node cpu capacity is 6
	I0602 11:18:08.718485   15352 node_conditions.go:105] duration metric: took 5.979977ms to run NodePressure ...
	I0602 11:18:08.718498   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 11:18:08.917133   15352 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0602 11:18:08.963399   15352 kubeadm.go:777] kubelet initialised
	I0602 11:18:08.963410   15352 kubeadm.go:778] duration metric: took 46.263216ms waiting for restarted kubelet to initialise ...
	I0602 11:18:08.963418   15352 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:18:08.968510   15352 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-mqhps" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:08.973930   15352 pod_ready.go:92] pod "coredns-64897985d-mqhps" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:08.973941   15352 pod_ready.go:81] duration metric: took 5.418497ms waiting for pod "coredns-64897985d-mqhps" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:08.973947   15352 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:10.987864   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:13.489319   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:15.984994   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:17.985135   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:20.487923   15352 pod_ready.go:102] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:20.984961   15352 pod_ready.go:92] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:20.984975   15352 pod_ready.go:81] duration metric: took 12.010814852s waiting for pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:20.984981   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:22.996747   15352 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:23.497076   15352 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.497088   15352 pod_ready.go:81] duration metric: took 2.512058532s waiting for pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.497094   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.500990   15352 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.500999   15352 pod_ready.go:81] duration metric: took 3.899621ms waiting for pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.501005   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hxhmn" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.504762   15352 pod_ready.go:92] pod "kube-proxy-hxhmn" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.504770   15352 pod_ready.go:81] duration metric: took 3.760621ms waiting for pod "kube-proxy-hxhmn" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.504775   15352 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.508796   15352 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:18:23.508803   15352 pod_ready.go:81] duration metric: took 4.023396ms waiting for pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:23.508810   15352 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace to be "Ready" ...
	I0602 11:18:25.519475   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:28.019880   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:30.021312   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:32.520124   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:35.018464   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:37.019378   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:39.020228   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:41.520520   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:44.019685   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:46.021361   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:48.517860   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:50.519722   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:52.520558   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:55.021033   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:57.518515   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:18:59.520949   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:01.521775   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:04.020252   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:06.021659   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:08.522036   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:11.019578   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:13.021252   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:15.519890   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:17.522449   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:20.019069   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:22.022494   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:24.519019   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:26.520994   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:29.019342   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:31.021808   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:33.518558   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:35.522527   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:38.019317   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:40.021350   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:42.519178   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:44.522452   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:47.020277   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:49.020861   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:51.021940   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:53.522777   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:56.022962   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:19:58.023294   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:00.519960   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:02.521430   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:05.022687   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:07.522208   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:10.021463   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:12.519965   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:14.522183   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:17.021383   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:19.023054   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:21.520910   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:23.523643   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:26.021449   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:28.023761   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:30.522348   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:33.024537   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:35.523518   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:37.523926   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:40.023533   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:42.520330   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:44.521363   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:46.523702   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:49.021771   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:51.022021   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:53.022137   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:55.024682   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:20:57.522459   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:00.022039   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:02.022164   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:04.022963   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:06.023102   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:08.520914   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:10.522452   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:13.022353   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:15.024327   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:17.024604   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:19.024700   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:21.521873   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:24.026794   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:26.523991   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:29.022868   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:31.023261   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:33.023747   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:35.024513   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:37.522052   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:39.523349   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:41.523819   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:44.023580   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:46.524426   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:48.524790   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:51.025030   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:53.522632   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:55.523997   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:21:57.526073   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:00.025125   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:02.522387   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:04.525282   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:07.024864   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:09.523673   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:11.524761   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:13.525553   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:16.023071   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:18.023459   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:20.525701   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:23.023773   15352 pod_ready.go:102] pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace has status "Ready":"False"
	I0602 11:22:23.517112   15352 pod_ready.go:81] duration metric: took 4m0.004136963s waiting for pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace to be "Ready" ...
	E0602 11:22:23.517134   15352 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-5k65t" in "kube-system" namespace to be "Ready" (will not retry!)
	I0602 11:22:23.517161   15352 pod_ready.go:38] duration metric: took 4m14.54933227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:22:23.517193   15352 kubeadm.go:630] restartCluster took 4m24.615456672s
	W0602 11:22:23.517311   15352 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0602 11:22:23.517339   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0602 11:23:01.958873   15352 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.440855806s)
	I0602 11:23:01.958935   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:23:01.968583   15352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 11:23:01.976178   15352 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 11:23:01.976221   15352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 11:23:01.983698   15352 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 11:23:01.983724   15352 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 11:23:02.466453   15352 out.go:204]   - Generating certificates and keys ...
	I0602 11:23:03.315809   15352 out.go:204]   - Booting up control plane ...
	I0602 11:23:09.371051   15352 out.go:204]   - Configuring RBAC rules ...
	I0602 11:23:09.860945   15352 cni.go:95] Creating CNI manager for ""
	I0602 11:23:09.860961   15352 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 11:23:09.860985   15352 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 11:23:09.861071   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:09.861074   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=embed-certs-20220602111648-2113 minikube.k8s.io/updated_at=2022_06_02T11_23_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:09.876275   15352 ops.go:34] apiserver oom_adj: -16
	I0602 11:23:10.001073   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:10.577726   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:11.076447   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:11.576911   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:12.076437   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:12.576329   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:13.076844   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:13.577067   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:14.078221   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:14.576893   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:15.076756   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:15.576823   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:16.077283   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:16.577898   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:17.078403   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:17.577411   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:18.077781   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:18.576420   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:19.076555   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:19.576528   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:20.076412   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:20.578098   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:21.077009   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:21.576491   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:22.076566   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:22.576477   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:23.076580   15352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 11:23:23.131668   15352 kubeadm.go:1045] duration metric: took 13.27043884s to wait for elevateKubeSystemPrivileges.
	I0602 11:23:23.131685   15352 kubeadm.go:397] StartCluster complete in 5m24.265555176s
	I0602 11:23:23.131703   15352 settings.go:142] acquiring lock: {Name:mka48fc2cc9e132f8df9370d54d7f09abdd5d2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:23:23.131777   15352 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 11:23:23.132516   15352 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk1f0a80092170cbf11b6fd31a116bac5868ab18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 11:23:23.648470   15352 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220602111648-2113" rescaled to 1
	I0602 11:23:23.648513   15352 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 11:23:23.648518   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 11:23:23.648543   15352 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0602 11:23:23.648750   15352 config.go:178] Loaded profile config "embed-certs-20220602111648-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 11:23:23.688318   15352 out.go:177] * Verifying Kubernetes components...
	I0602 11:23:23.688398   15352 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220602111648-2113"
	I0602 11:23:23.688412   15352 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220602111648-2113"
	I0602 11:23:23.688416   15352 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220602111648-2113"
	I0602 11:23:23.747362   15352 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220602111648-2113"
	I0602 11:23:23.747382   15352 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220602111648-2113"
	W0602 11:23:23.747391   15352 addons.go:165] addon metrics-server should already be in state true
	I0602 11:23:23.688418   15352 addons.go:65] Setting dashboard=true in profile "embed-certs-20220602111648-2113"
	I0602 11:23:23.747430   15352 addons.go:153] Setting addon dashboard=true in "embed-certs-20220602111648-2113"
	I0602 11:23:23.747436   15352 host.go:66] Checking if "embed-certs-20220602111648-2113" exists ...
	I0602 11:23:23.747346   15352 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220602111648-2113"
	W0602 11:23:23.747460   15352 addons.go:165] addon storage-provisioner should already be in state true
	I0602 11:23:23.747347   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:23:23.747489   15352 host.go:66] Checking if "embed-certs-20220602111648-2113" exists ...
	W0602 11:23:23.747442   15352 addons.go:165] addon dashboard should already be in state true
	I0602 11:23:23.747558   15352 host.go:66] Checking if "embed-certs-20220602111648-2113" exists ...
	I0602 11:23:23.747757   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.747877   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.748464   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.748914   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.764413   15352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 11:23:23.774077   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:23.869691   15352 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220602111648-2113"
	W0602 11:23:23.930483   15352 addons.go:165] addon default-storageclass should already be in state true
	I0602 11:23:23.888755   15352 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0602 11:23:23.930518   15352 host.go:66] Checking if "embed-certs-20220602111648-2113" exists ...
	I0602 11:23:23.909477   15352 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0602 11:23:23.930420   15352 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 11:23:23.931179   15352 cli_runner.go:164] Run: docker container inspect embed-certs-20220602111648-2113 --format={{.State.Status}}
	I0602 11:23:23.963267   15352 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220602111648-2113" to be "Ready" ...
	I0602 11:23:23.988507   15352 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0602 11:23:24.030625   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0602 11:23:24.009644   15352 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:23:24.030682   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 11:23:24.030501   15352 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0602 11:23:24.030848   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:24.030873   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:24.052496   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0602 11:23:24.052546   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0602 11:23:24.053233   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:24.057714   15352 node_ready.go:49] node "embed-certs-20220602111648-2113" has status "Ready":"True"
	I0602 11:23:24.057730   15352 node_ready.go:38] duration metric: took 27.16478ms waiting for node "embed-certs-20220602111648-2113" to be "Ready" ...
	I0602 11:23:24.057737   15352 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:23:24.066339   15352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-ps5fw" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:24.074444   15352 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 11:23:24.074462   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 11:23:24.074538   15352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220602111648-2113
	I0602 11:23:24.146535   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:23:24.147249   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:23:24.153394   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:23:24.156726   15352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54890 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/embed-certs-20220602111648-2113/id_rsa Username:docker}
	I0602 11:23:24.239554   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0602 11:23:24.239569   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0602 11:23:24.241122   15352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 11:23:24.252061   15352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 11:23:24.253413   15352 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0602 11:23:24.253424   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0602 11:23:24.256097   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0602 11:23:24.256112   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0602 11:23:24.271732   15352 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0602 11:23:24.271747   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0602 11:23:24.344571   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0602 11:23:24.344589   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0602 11:23:24.351954   15352 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:23:24.351970   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0602 11:23:24.368890   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0602 11:23:24.368902   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0602 11:23:24.454221   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0602 11:23:24.454235   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0602 11:23:24.454670   15352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 11:23:24.471327   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0602 11:23:24.471339   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0602 11:23:24.485285   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0602 11:23:24.485299   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0602 11:23:24.548818   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0602 11:23:24.548835   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0602 11:23:24.644195   15352 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:23:24.644208   15352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0602 11:23:24.675479   15352 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0602 11:23:24.679579   15352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 11:23:24.942852   15352 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220602111648-2113"
	I0602 11:23:25.565936   15352 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0602 11:23:25.587209   15352 addons.go:417] enableAddons completed in 1.938607775s
	I0602 11:23:26.083086   15352 pod_ready.go:102] pod "coredns-64897985d-ps5fw" in "kube-system" namespace has status "Ready":"False"
	I0602 11:23:26.585079   15352 pod_ready.go:92] pod "coredns-64897985d-ps5fw" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.585092   15352 pod_ready.go:81] duration metric: took 2.518690418s waiting for pod "coredns-64897985d-ps5fw" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.585099   15352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-zhfn8" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.589749   15352 pod_ready.go:92] pod "coredns-64897985d-zhfn8" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.589758   15352 pod_ready.go:81] duration metric: took 4.642896ms waiting for pod "coredns-64897985d-zhfn8" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.589768   15352 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.593913   15352 pod_ready.go:92] pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.593921   15352 pod_ready.go:81] duration metric: took 4.149186ms waiting for pod "etcd-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.593929   15352 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.598343   15352 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.598352   15352 pod_ready.go:81] duration metric: took 4.418374ms waiting for pod "kube-apiserver-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.598358   15352 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.603237   15352 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.603246   15352 pod_ready.go:81] duration metric: took 4.883426ms waiting for pod "kube-controller-manager-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.603253   15352 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gcmn9" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.983215   15352 pod_ready.go:92] pod "kube-proxy-gcmn9" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:26.983225   15352 pod_ready.go:81] duration metric: took 379.960719ms waiting for pod "kube-proxy-gcmn9" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:26.983235   15352 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:27.383538   15352 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace has status "Ready":"True"
	I0602 11:23:27.383549   15352 pod_ready.go:81] duration metric: took 400.30138ms waiting for pod "kube-scheduler-embed-certs-20220602111648-2113" in "kube-system" namespace to be "Ready" ...
	I0602 11:23:27.383554   15352 pod_ready.go:38] duration metric: took 3.325734057s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 11:23:27.383567   15352 api_server.go:51] waiting for apiserver process to appear ...
	I0602 11:23:27.383609   15352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 11:23:27.397670   15352 api_server.go:71] duration metric: took 3.749073932s to wait for apiserver process to appear ...
	I0602 11:23:27.397685   15352 api_server.go:87] waiting for apiserver healthz status ...
	I0602 11:23:27.397693   15352 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54894/healthz ...
	I0602 11:23:27.402642   15352 api_server.go:266] https://127.0.0.1:54894/healthz returned 200:
	ok
	I0602 11:23:27.403842   15352 api_server.go:140] control plane version: v1.23.6
	I0602 11:23:27.403853   15352 api_server.go:130] duration metric: took 6.160965ms to wait for apiserver health ...
	I0602 11:23:27.403858   15352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 11:23:27.585352   15352 system_pods.go:59] 9 kube-system pods found
	I0602 11:23:27.585372   15352 system_pods.go:61] "coredns-64897985d-ps5fw" [dca916a9-6a4a-407e-af4d-19f98f5aa6c4] Running
	I0602 11:23:27.585400   15352 system_pods.go:61] "coredns-64897985d-zhfn8" [c17ca662-7b52-40a8-b1b1-661983c183d4] Running
	I0602 11:23:27.585405   15352 system_pods.go:61] "etcd-embed-certs-20220602111648-2113" [729f1076-c1d1-40f2-8c74-0716513f8c59] Running
	I0602 11:23:27.585411   15352 system_pods.go:61] "kube-apiserver-embed-certs-20220602111648-2113" [0e9e3a9d-e57f-48f8-a66e-d51393f9e509] Running
	I0602 11:23:27.585416   15352 system_pods.go:61] "kube-controller-manager-embed-certs-20220602111648-2113" [f15aadc2-e920-484a-bb54-c1db87cf9b51] Running
	I0602 11:23:27.585423   15352 system_pods.go:61] "kube-proxy-gcmn9" [9f001538-3e2b-455a-999c-bbb8b7ce2082] Running
	I0602 11:23:27.585430   15352 system_pods.go:61] "kube-scheduler-embed-certs-20220602111648-2113" [7d78f2d1-2fd3-4d17-a604-123c557dc94b] Running
	I0602 11:23:27.585435   15352 system_pods.go:61] "metrics-server-b955d9d8-d6jzn" [2e3f5fb8-e6aa-41f3-a689-f4ebd249a466] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:23:27.585442   15352 system_pods.go:61] "storage-provisioner" [37849889-6793-4475-a0b1-28f0412b616e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 11:23:27.585450   15352 system_pods.go:74] duration metric: took 181.583327ms to wait for pod list to return data ...
	I0602 11:23:27.585460   15352 default_sa.go:34] waiting for default service account to be created ...
	I0602 11:23:27.780881   15352 default_sa.go:45] found service account: "default"
	I0602 11:23:27.780894   15352 default_sa.go:55] duration metric: took 195.425402ms for default service account to be created ...
	I0602 11:23:27.780901   15352 system_pods.go:116] waiting for k8s-apps to be running ...
	I0602 11:23:27.983547   15352 system_pods.go:86] 9 kube-system pods found
	I0602 11:23:27.983561   15352 system_pods.go:89] "coredns-64897985d-ps5fw" [dca916a9-6a4a-407e-af4d-19f98f5aa6c4] Running
	I0602 11:23:27.983566   15352 system_pods.go:89] "coredns-64897985d-zhfn8" [c17ca662-7b52-40a8-b1b1-661983c183d4] Running
	I0602 11:23:27.983569   15352 system_pods.go:89] "etcd-embed-certs-20220602111648-2113" [729f1076-c1d1-40f2-8c74-0716513f8c59] Running
	I0602 11:23:27.983582   15352 system_pods.go:89] "kube-apiserver-embed-certs-20220602111648-2113" [0e9e3a9d-e57f-48f8-a66e-d51393f9e509] Running
	I0602 11:23:27.983587   15352 system_pods.go:89] "kube-controller-manager-embed-certs-20220602111648-2113" [f15aadc2-e920-484a-bb54-c1db87cf9b51] Running
	I0602 11:23:27.983591   15352 system_pods.go:89] "kube-proxy-gcmn9" [9f001538-3e2b-455a-999c-bbb8b7ce2082] Running
	I0602 11:23:27.983597   15352 system_pods.go:89] "kube-scheduler-embed-certs-20220602111648-2113" [7d78f2d1-2fd3-4d17-a604-123c557dc94b] Running
	I0602 11:23:27.983604   15352 system_pods.go:89] "metrics-server-b955d9d8-d6jzn" [2e3f5fb8-e6aa-41f3-a689-f4ebd249a466] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 11:23:27.983611   15352 system_pods.go:89] "storage-provisioner" [37849889-6793-4475-a0b1-28f0412b616e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 11:23:27.983617   15352 system_pods.go:126] duration metric: took 202.708238ms to wait for k8s-apps to be running ...
	I0602 11:23:27.983624   15352 system_svc.go:44] waiting for kubelet service to be running ....
	I0602 11:23:27.983673   15352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 11:23:27.996729   15352 system_svc.go:56] duration metric: took 13.098129ms WaitForService to wait for kubelet.
	I0602 11:23:27.996748   15352 kubeadm.go:572] duration metric: took 4.348143989s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0602 11:23:27.996779   15352 node_conditions.go:102] verifying NodePressure condition ...
	I0602 11:23:28.181774   15352 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0602 11:23:28.181787   15352 node_conditions.go:123] node cpu capacity is 6
	I0602 11:23:28.181794   15352 node_conditions.go:105] duration metric: took 184.999302ms to run NodePressure ...
	I0602 11:23:28.181802   15352 start.go:213] waiting for startup goroutines ...
	I0602 11:23:28.214838   15352 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0602 11:23:28.236679   15352 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220602111648-2113" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 18:17:55 UTC, end at Thu 2022-06-02 18:24:37 UTC. --
	Jun 02 18:22:50 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:22:50.403618462Z" level=info msg="ignoring event" container=94b7e00a9081ce5c3377a576ba713958be99d44c60e7db71eb41b852aa9b4446 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:00 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:00.469892549Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=52b0d2ee4254759b60d6721bf326faec9c50ab82f8a7fe842ea6790b7c2d91b1
	Jun 02 18:23:00 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:00.527240241Z" level=info msg="ignoring event" container=52b0d2ee4254759b60d6721bf326faec9c50ab82f8a7fe842ea6790b7c2d91b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:00 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:00.634369574Z" level=info msg="ignoring event" container=86200dafbd9cf358dd60a97f4c239f2681e51a7a638658f45c706c59d27724f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:00 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:00.736103019Z" level=info msg="ignoring event" container=c3f925b9e9e7f020fd42beea83f7c7cdf8f798b7a7dfcf5c65ae9f6dc9f6a571 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:00 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:00.835136162Z" level=info msg="ignoring event" container=8ea0dee2c1866f6c78e5cb45742b33e53ef491b28459ed76c2dd163340972f66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:00 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:00.966456948Z" level=info msg="ignoring event" container=cda458dd692b609a25a006deec1dbe258c6f9cd68f11da48016684ebc55836da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:25 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:25.746073975Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:23:25 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:25.746119539Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:23:25 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:25.747186190Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:23:27 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:27.034820255Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 02 18:23:27 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:27.263016186Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 02 18:23:29 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:29.085990553Z" level=info msg="ignoring event" container=0f8cdd07d435a722e7d8ef64ce7658592d603989924500e4a60d524fa406adcf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:29 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:29.250883510Z" level=info msg="ignoring event" container=2b0f4a27cc00293d7efbcb6fa58acc86538eb112c41cb408cdbf412a3d4f5ac6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:30 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:30.436850569Z" level=info msg="ignoring event" container=57887a908ac97354796360d9926013f2c4f4f04cb22971793db7434569d9e5ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:30 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:30.547965329Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 02 18:23:31 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:31.141596334Z" level=info msg="ignoring event" container=2de37547d73dee06293ff8604ac9806e88299e034162b0416d1da8d019557954 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:23:42 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:42.297489060Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:23:42 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:42.297535874Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:23:42 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:42.365599532Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:23:46 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:23:46.906289957Z" level=info msg="ignoring event" container=719c668db198a73cd1140dd0ba0b21c998dba8794cf40c0c36f4958e2301f21b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 18:24:34 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:24:34.326615590Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:24:34 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:24:34.326667198Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:24:34 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:24:34.327817374Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 02 18:24:34 embed-certs-20220602111648-2113 dockerd[130]: time="2022-06-02T18:24:34.927809514Z" level=info msg="ignoring event" container=7ac5e7e75b7417d68188735dc1bd57b85aaf572115658757440ce5df75816b6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	7ac5e7e75b741       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   3                   50c0b7d606286
	86bec09400c89       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   55 seconds ago       Running             kubernetes-dashboard        0                   5b77ffce4ef9c
	904d4dbfb80e2       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   d7c87dc9440b3
	3cc96a0578309       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   05bcaf5db0a26
	d194d787baa35       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   47d796f4a407e
	bbe638877cc03       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   53141afdb0d3f
	6de502b252383       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   86e0ae1b95d2d
	8b190f5996d51       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   926a90d207bf2
	5ed407736824f       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   e688fa4f15aa7
	
	* 
	* ==> coredns [3cc96a057830] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220602111648-2113
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220602111648-2113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=embed-certs-20220602111648-2113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T11_23_09_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 18:23:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220602111648-2113
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 18:24:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 18:24:31 +0000   Thu, 02 Jun 2022 18:23:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 18:24:31 +0000   Thu, 02 Jun 2022 18:23:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 18:24:31 +0000   Thu, 02 Jun 2022 18:23:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 18:24:31 +0000   Thu, 02 Jun 2022 18:24:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20220602111648-2113
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                19795094-b018-4c71-93fc-a30c871a2c0a
	  Boot ID:                    a475dd08-72ba-4c6d-89c1-75a58adc3783
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-ps5fw                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     76s
	  kube-system                 etcd-embed-certs-20220602111648-2113                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         88s
	  kube-system                 kube-apiserver-embed-certs-20220602111648-2113             250m (4%)     0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-embed-certs-20220602111648-2113    200m (3%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-gcmn9                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-embed-certs-20220602111648-2113             100m (1%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 metrics-server-b955d9d8-d6jzn                              100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         74s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-hf9nn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kubernetes-dashboard        kubernetes-dashboard-cd7c84bfc-gg4gx                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 74s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    95s (x4 over 95s)  kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x4 over 95s)  kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  95s (x4 over 95s)  kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasSufficientMemory
	  Normal  Starting                 89s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  89s                kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s                kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  89s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                78s                kubelet     Node embed-certs-20220602111648-2113 status is now: NodeReady
	  Normal  Starting                 7s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet     Node embed-certs-20220602111648-2113 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s                 kubelet     Node embed-certs-20220602111648-2113 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  7s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7s                 kubelet     Node embed-certs-20220602111648-2113 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [6de502b25238] <==
	* {"level":"info","ts":"2022-06-02T18:23:04.784Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-02T18:23:04.784Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-02T18:23:04.784Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:23:04.784Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-02T18:23:04.976Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:23:04.977Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:23:04.977Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:23:04.977Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T18:23:04.977Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20220602111648-2113 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T18:23:04.977Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:23:04.978Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T18:23:04.978Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T18:23:04.979Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-02T18:23:04.979Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T18:23:04.979Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2022-06-02T18:23:28.344Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.691637ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T18:23:28.344Z","caller":"traceutil/trace.go:171","msg":"trace[1143572392] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:585; }","duration":"102.843372ms","start":"2022-06-02T18:23:28.241Z","end":"2022-06-02T18:23:28.344Z","steps":["trace[1143572392] 'range keys from in-memory index tree'  (duration: 102.640117ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T18:23:41.911Z","caller":"traceutil/trace.go:171","msg":"trace[1951724529] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"102.515767ms","start":"2022-06-02T18:23:41.808Z","end":"2022-06-02T18:23:41.911Z","steps":["trace[1951724529] 'process raft request'  (duration: 101.291674ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  18:24:38 up  1:12,  0 users,  load average: 0.48, 0.59, 0.86
	Linux embed-certs-20220602111648-2113 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [8b190f5996d5] <==
	* I0602 18:23:08.215088       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 18:23:08.240642       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 18:23:08.285171       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0602 18:23:08.288731       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0602 18:23:08.289452       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 18:23:08.292009       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0602 18:23:09.081645       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 18:23:09.586876       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 18:23:09.593677       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0602 18:23:09.602213       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0602 18:23:09.775161       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0602 18:23:21.988418       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0602 18:23:22.839571       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0602 18:23:23.565594       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0602 18:23:24.901439       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.99.192.133]
	I0602 18:23:25.493503       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.211.224]
	I0602 18:23:25.506582       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.100.42.109]
	W0602 18:23:25.766488       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 18:23:25.766609       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 18:23:25.766636       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0602 18:24:30.422135       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 18:24:30.422234       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 18:24:30.422241       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [bbe638877cc0] <==
	* I0602 18:23:24.783742       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0602 18:23:24.787727       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0602 18:23:24.791820       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0602 18:23:24.795292       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-d6jzn"
	I0602 18:23:25.405484       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0602 18:23:25.409809       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:23:25.413265       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0602 18:23:25.417074       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:23:25.417106       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0602 18:23:25.417541       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-cd7c84bfc to 1"
	I0602 18:23:25.420518       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:23:25.423825       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0602 18:23:25.423880       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:23:25.423841       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:23:25.430356       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:23:25.430539       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 18:23:25.433334       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 18:23:25.433383       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0602 18:23:25.462068       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-cd7c84bfc-gg4gx"
	I0602 18:23:25.465557       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-hf9nn"
	W0602 18:23:30.026329       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	E0602 18:23:52.033611       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0602 18:23:52.545393       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0602 18:24:30.635858       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0602 18:24:30.704345       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [d194d787baa3] <==
	* I0602 18:23:23.495206       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0602 18:23:23.495261       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0602 18:23:23.495305       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 18:23:23.517209       1 server_others.go:206] "Using iptables Proxier"
	I0602 18:23:23.517259       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 18:23:23.517267       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 18:23:23.517300       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 18:23:23.517632       1 server.go:656] "Version info" version="v1.23.6"
	I0602 18:23:23.561288       1 config.go:317] "Starting service config controller"
	I0602 18:23:23.561990       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 18:23:23.561482       1 config.go:226] "Starting endpoint slice config controller"
	I0602 18:23:23.562008       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 18:23:23.662567       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 18:23:23.662608       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [5ed407736824] <==
	* W0602 18:23:06.983759       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0602 18:23:06.983768       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0602 18:23:06.983851       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 18:23:06.983879       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 18:23:06.983982       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 18:23:06.984010       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 18:23:06.984371       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0602 18:23:06.984380       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0602 18:23:06.984429       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 18:23:06.984488       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 18:23:06.984631       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0602 18:23:06.984640       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 18:23:06.984670       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 18:23:06.984795       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0602 18:23:07.827418       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 18:23:07.827458       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 18:23:07.897392       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 18:23:07.897429       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 18:23:08.044637       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 18:23:08.044674       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 18:23:08.051488       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0602 18:23:08.051526       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0602 18:23:08.103626       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0602 18:23:08.103665       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0602 18:23:10.479922       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 18:17:55 UTC, end at Thu 2022-06-02 18:24:39 UTC. --
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223169    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f001538-3e2b-455a-999c-bbb8b7ce2082-xtables-lock\") pod \"kube-proxy-gcmn9\" (UID: \"9f001538-3e2b-455a-999c-bbb8b7ce2082\") " pod="kube-system/kube-proxy-gcmn9"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223235    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f001538-3e2b-455a-999c-bbb8b7ce2082-lib-modules\") pod \"kube-proxy-gcmn9\" (UID: \"9f001538-3e2b-455a-999c-bbb8b7ce2082\") " pod="kube-system/kube-proxy-gcmn9"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223257    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8bqf\" (UniqueName: \"kubernetes.io/projected/dca916a9-6a4a-407e-af4d-19f98f5aa6c4-kube-api-access-h8bqf\") pod \"coredns-64897985d-ps5fw\" (UID: \"dca916a9-6a4a-407e-af4d-19f98f5aa6c4\") " pod="kube-system/coredns-64897985d-ps5fw"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223286    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2lw2\" (UniqueName: \"kubernetes.io/projected/37849889-6793-4475-a0b1-28f0412b616e-kube-api-access-d2lw2\") pod \"storage-provisioner\" (UID: \"37849889-6793-4475-a0b1-28f0412b616e\") " pod="kube-system/storage-provisioner"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223309    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a8426b52-aeeb-4e11-8366-7cbf31b79047-tmp-volume\") pod \"kubernetes-dashboard-cd7c84bfc-gg4gx\" (UID: \"a8426b52-aeeb-4e11-8366-7cbf31b79047\") " pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-gg4gx"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223342    7185 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6wpj\" (UniqueName: \"kubernetes.io/projected/9f001538-3e2b-455a-999c-bbb8b7ce2082-kube-api-access-n6wpj\") pod \"kube-proxy-gcmn9\" (UID: \"9f001538-3e2b-455a-999c-bbb8b7ce2082\") " pod="kube-system/kube-proxy-gcmn9"
	Jun 02 18:24:32 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:32.223353    7185 reconciler.go:157] "Reconciler: start to sync state"
	Jun 02 18:24:33 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:33.397821    7185 request.go:665] Waited for 1.194873126s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 02 18:24:33 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:33.470016    7185 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220602111648-2113\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220602111648-2113"
	Jun 02 18:24:33 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:33.685527    7185 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220602111648-2113\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220602111648-2113"
	Jun 02 18:24:33 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:33.853439    7185 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220602111648-2113\" already exists" pod="kube-system/etcd-embed-certs-20220602111648-2113"
	Jun 02 18:24:34 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:34.001587    7185 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220602111648-2113\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220602111648-2113"
	Jun 02 18:24:34 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:34.328275    7185 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 02 18:24:34 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:34.328304    7185 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 02 18:24:34 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:34.328384    7185 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bfgw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHa
ndler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMess
agePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-d6jzn_kube-system(2e3f5fb8-e6aa-41f3-a689-f4ebd249a466): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 02 18:24:34 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:34.328409    7185 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-d6jzn" podUID=2e3f5fb8-e6aa-41f3-a689-f4ebd249a466
	Jun 02 18:24:34 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:34.601453    7185 scope.go:110] "RemoveContainer" containerID="719c668db198a73cd1140dd0ba0b21c998dba8794cf40c0c36f4958e2301f21b"
	Jun 02 18:24:35 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:35.218657    7185 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-hf9nn through plugin: invalid network status for"
	Jun 02 18:24:35 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:35.224684    7185 scope.go:110] "RemoveContainer" containerID="719c668db198a73cd1140dd0ba0b21c998dba8794cf40c0c36f4958e2301f21b"
	Jun 02 18:24:35 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:35.224977    7185 scope.go:110] "RemoveContainer" containerID="7ac5e7e75b7417d68188735dc1bd57b85aaf572115658757440ce5df75816b6b"
	Jun 02 18:24:35 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:35.225343    7185 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-hf9nn_kubernetes-dashboard(434b02ce-7d60-4db2-979f-96ada075b5f6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-hf9nn" podUID=434b02ce-7d60-4db2-979f-96ada075b5f6
	Jun 02 18:24:36 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:36.230644    7185 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-hf9nn through plugin: invalid network status for"
	Jun 02 18:24:36 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:36.913863    7185 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Jun 02 18:24:37 embed-certs-20220602111648-2113 kubelet[7185]: I0602 18:24:37.438917    7185 scope.go:110] "RemoveContainer" containerID="7ac5e7e75b7417d68188735dc1bd57b85aaf572115658757440ce5df75816b6b"
	Jun 02 18:24:37 embed-certs-20220602111648-2113 kubelet[7185]: E0602 18:24:37.439106    7185 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-hf9nn_kubernetes-dashboard(434b02ce-7d60-4db2-979f-96ada075b5f6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-hf9nn" podUID=434b02ce-7d60-4db2-979f-96ada075b5f6
	
	* 
	* ==> kubernetes-dashboard [86bec09400c8] <==
	* 2022/06/02 18:23:42 Using namespace: kubernetes-dashboard
	2022/06/02 18:23:42 Using in-cluster config to connect to apiserver
	2022/06/02 18:23:42 Using secret token for csrf signing
	2022/06/02 18:23:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/02 18:23:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/02 18:23:42 Successful initial request to the apiserver, version: v1.23.6
	2022/06/02 18:23:42 Generating JWE encryption key
	2022/06/02 18:23:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/02 18:23:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/02 18:23:42 Initializing JWE encryption key from synchronized object
	2022/06/02 18:23:42 Creating in-cluster Sidecar client
	2022/06/02 18:23:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/02 18:23:42 Serving insecurely on HTTP port: 9090
	2022/06/02 18:24:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/02 18:23:42 Starting overwatch
	
	* 
	* ==> storage-provisioner [904d4dbfb80e] <==
	* I0602 18:23:25.938956       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 18:23:25.946968       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 18:23:25.947068       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 18:23:25.952642       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 18:23:25.952799       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220602111648-2113_635a4672-3cc6-4467-9beb-e2412c23cc74!
	I0602 18:23:25.952850       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"837d08af-65b6-4fa1-bdba-c1b746bcd758", APIVersion:"v1", ResourceVersion:"570", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220602111648-2113_635a4672-3cc6-4467-9beb-e2412c23cc74 became leader
	I0602 18:23:26.053131       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220602111648-2113_635a4672-3cc6-4467-9beb-e2412c23cc74!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220602111648-2113 -n embed-certs-20220602111648-2113
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220602111648-2113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-d6jzn
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220602111648-2113 describe pod metrics-server-b955d9d8-d6jzn
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220602111648-2113 describe pod metrics-server-b955d9d8-d6jzn: exit status 1 (306.270828ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-d6jzn" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220602111648-2113 describe pod metrics-server-b955d9d8-d6jzn: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (43.59s)

                                                
                                    

Test pass (242/282)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 21.57
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.32
10 TestDownloadOnly/v1.23.6/json-events 6.83
11 TestDownloadOnly/v1.23.6/preload-exists 0
14 TestDownloadOnly/v1.23.6/kubectl 0
15 TestDownloadOnly/v1.23.6/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.75
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.43
18 TestDownloadOnlyKic 7.09
19 TestBinaryMirror 1.71
20 TestOffline 43.91
22 TestAddons/Setup 108.01
26 TestAddons/parallel/MetricsServer 5.65
27 TestAddons/parallel/HelmTiller 13.35
29 TestAddons/parallel/CSI 39.43
31 TestAddons/serial/GCPAuth 14.66
32 TestAddons/StoppedEnableDisable 13
33 TestCertOptions 29.38
34 TestCertExpiration 213.58
35 TestDockerFlags 28.79
36 TestForceSystemdFlag 29.68
37 TestForceSystemdEnv 28.51
39 TestHyperKitDriverInstallOrUpdate 7.62
42 TestErrorSpam/setup 23.05
43 TestErrorSpam/start 2.24
44 TestErrorSpam/status 1.32
45 TestErrorSpam/pause 1.92
46 TestErrorSpam/unpause 1.95
47 TestErrorSpam/stop 13.23
50 TestFunctional/serial/CopySyncFile 0
51 TestFunctional/serial/StartWithProxy 39.6
52 TestFunctional/serial/AuditLog 0
53 TestFunctional/serial/SoftStart 6.23
54 TestFunctional/serial/KubeContext 0.03
55 TestFunctional/serial/KubectlGetPods 1.47
58 TestFunctional/serial/CacheCmd/cache/add_remote 4.33
59 TestFunctional/serial/CacheCmd/cache/add_local 1.77
60 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
61 TestFunctional/serial/CacheCmd/cache/list 0.07
62 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.44
63 TestFunctional/serial/CacheCmd/cache/cache_reload 2.42
64 TestFunctional/serial/CacheCmd/cache/delete 0.14
65 TestFunctional/serial/MinikubeKubectlCmd 0.49
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.62
67 TestFunctional/serial/ExtraConfig 33.24
68 TestFunctional/serial/ComponentHealth 0.05
69 TestFunctional/serial/LogsCmd 3.21
70 TestFunctional/serial/LogsFileCmd 3.27
72 TestFunctional/parallel/ConfigCmd 0.51
73 TestFunctional/parallel/DashboardCmd 13.17
74 TestFunctional/parallel/DryRun 1.29
75 TestFunctional/parallel/InternationalLanguage 0.63
76 TestFunctional/parallel/StatusCmd 1.32
79 TestFunctional/parallel/ServiceCmd 16.04
81 TestFunctional/parallel/AddonsCmd 0.44
82 TestFunctional/parallel/PersistentVolumeClaim 28.5
84 TestFunctional/parallel/SSHCmd 0.92
85 TestFunctional/parallel/CpCmd 1.75
86 TestFunctional/parallel/MySQL 19.25
87 TestFunctional/parallel/FileSync 0.43
88 TestFunctional/parallel/CertSync 2.64
92 TestFunctional/parallel/NodeLabels 0.04
94 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
96 TestFunctional/parallel/Version/short 0.16
97 TestFunctional/parallel/Version/components 0.65
98 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
99 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
100 TestFunctional/parallel/ImageCommands/ImageListJson 0.38
101 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
102 TestFunctional/parallel/ImageCommands/ImageBuild 3.13
103 TestFunctional/parallel/ImageCommands/Setup 1.85
104 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.15
105 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.36
106 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.91
107 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.35
108 TestFunctional/parallel/ImageCommands/ImageRemove 0.76
109 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.89
110 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.44
111 TestFunctional/parallel/DockerEnv/bash 1.67
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.41
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.31
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.16
119 TestFunctional/parallel/MountCmd/any-port 11.03
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/MountCmd/specific-port 2.52
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.6
128 TestFunctional/parallel/ProfileCmd/profile_list 0.54
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.6
130 TestFunctional/delete_addon-resizer_images 0.16
131 TestFunctional/delete_my-image_image 0.07
132 TestFunctional/delete_minikube_cached_images 0.07
142 TestJSONOutput/start/Command 41.14
143 TestJSONOutput/start/Audit 0
145 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/pause/Command 0.69
149 TestJSONOutput/pause/Audit 0
151 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/unpause/Command 0.64
155 TestJSONOutput/unpause/Audit 0
157 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/stop/Command 12.39
161 TestJSONOutput/stop/Audit 0
163 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
165 TestErrorJSONOutput 0.76
167 TestKicCustomNetwork/create_custom_network 27.08
168 TestKicCustomNetwork/use_default_bridge_network 24.68
169 TestKicExistingNetwork 26.97
170 TestKicCustomSubnet 25.28
171 TestMainNoArgs 0.07
172 TestMinikubeProfile 56.99
175 TestMountStart/serial/StartWithMountFirst 7.42
176 TestMountStart/serial/VerifyMountFirst 0.43
177 TestMountStart/serial/StartWithMountSecond 7.11
178 TestMountStart/serial/VerifyMountSecond 0.44
179 TestMountStart/serial/DeleteFirst 2.4
180 TestMountStart/serial/VerifyMountPostDelete 0.42
181 TestMountStart/serial/Stop 1.61
182 TestMountStart/serial/RestartStopped 4.99
183 TestMountStart/serial/VerifyMountPostStop 0.42
186 TestMultiNode/serial/FreshStart2Nodes 81.05
187 TestMultiNode/serial/DeployApp2Nodes 6.05
188 TestMultiNode/serial/PingHostFrom2Pods 0.8
189 TestMultiNode/serial/AddNode 25.64
190 TestMultiNode/serial/ProfileList 0.51
191 TestMultiNode/serial/CopyFile 16.31
192 TestMultiNode/serial/StopNode 14.17
193 TestMultiNode/serial/StartAfterStop 25.31
194 TestMultiNode/serial/RestartKeepsNodes 116.11
195 TestMultiNode/serial/DeleteNode 19.07
196 TestMultiNode/serial/StopMultiNode 25.29
197 TestMultiNode/serial/RestartMultiNode 59.66
198 TestMultiNode/serial/ValidateNameConflict 27.57
204 TestScheduledStopUnix 97.49
205 TestSkaffold 56.09
207 TestInsufficientStorage 12.91
223 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.22
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.03
225 TestStoppedBinaryUpgrade/Setup 0.74
227 TestStoppedBinaryUpgrade/MinikubeLogs 3.66
229 TestPause/serial/Start 39.15
230 TestPause/serial/SecondStartNoReconfiguration 6.63
231 TestPause/serial/Pause 0.75
241 TestNoKubernetes/serial/StartNoK8sWithVersion 0.37
242 TestNoKubernetes/serial/StartWithK8s 25.56
243 TestNoKubernetes/serial/StartWithStopK8s 18.91
244 TestNoKubernetes/serial/Start 6.52
245 TestNoKubernetes/serial/VerifyK8sNotRunning 0.44
246 TestNoKubernetes/serial/ProfileList 4.75
247 TestNoKubernetes/serial/Stop 1.76
248 TestNoKubernetes/serial/StartNoArgs 6.15
249 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.42
250 TestNetworkPlugins/group/auto/Start 43.11
251 TestNetworkPlugins/group/kindnet/Start 48.78
252 TestNetworkPlugins/group/auto/KubeletFlags 0.45
253 TestNetworkPlugins/group/auto/NetCatPod 11.92
254 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
255 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
256 TestNetworkPlugins/group/kindnet/NetCatPod 11.65
257 TestNetworkPlugins/group/auto/DNS 0.12
258 TestNetworkPlugins/group/auto/Localhost 0.13
259 TestNetworkPlugins/group/auto/HairPin 5.11
261 TestNetworkPlugins/group/kindnet/DNS 0.14
262 TestNetworkPlugins/group/kindnet/Localhost 0.14
263 TestNetworkPlugins/group/kindnet/HairPin 0.15
264 TestNetworkPlugins/group/calico/Start 67.08
265 TestNetworkPlugins/group/calico/ControllerPod 5.02
266 TestNetworkPlugins/group/calico/KubeletFlags 0.43
267 TestNetworkPlugins/group/calico/NetCatPod 10.72
268 TestNetworkPlugins/group/calico/DNS 0.12
269 TestNetworkPlugins/group/calico/Localhost 0.11
270 TestNetworkPlugins/group/calico/HairPin 0.1
271 TestNetworkPlugins/group/false/Start 40.27
272 TestNetworkPlugins/group/false/KubeletFlags 0.43
273 TestNetworkPlugins/group/false/NetCatPod 11.67
274 TestNetworkPlugins/group/false/DNS 0.12
275 TestNetworkPlugins/group/false/Localhost 0.11
276 TestNetworkPlugins/group/false/HairPin 5.11
277 TestNetworkPlugins/group/bridge/Start 40.36
278 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
279 TestNetworkPlugins/group/bridge/NetCatPod 11.74
280 TestNetworkPlugins/group/bridge/DNS 0.12
281 TestNetworkPlugins/group/bridge/Localhost 0.12
282 TestNetworkPlugins/group/bridge/HairPin 0.13
283 TestNetworkPlugins/group/enable-default-cni/Start 42.94
284 TestNetworkPlugins/group/kubenet/Start 39.69
285 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.45
286 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.96
287 TestNetworkPlugins/group/kubenet/KubeletFlags 0.44
288 TestNetworkPlugins/group/kubenet/NetCatPod 13.67
289 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
290 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
291 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
294 TestNetworkPlugins/group/kubenet/DNS 0.12
295 TestNetworkPlugins/group/kubenet/Localhost 0.1
296 TestNetworkPlugins/group/kubenet/HairPin 0.11
298 TestStartStop/group/no-preload/serial/FirstStart 50.4
299 TestStartStop/group/no-preload/serial/DeployApp 9.71
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.73
301 TestStartStop/group/no-preload/serial/Stop 12.52
302 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.32
303 TestStartStop/group/no-preload/serial/SecondStart 337.53
306 TestStartStop/group/old-k8s-version/serial/Stop 1.62
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.34
309 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.02
310 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.58
311 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.46
314 TestStartStop/group/default-k8s-different-port/serial/FirstStart 39.82
315 TestStartStop/group/default-k8s-different-port/serial/DeployApp 10.68
316 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.74
317 TestStartStop/group/default-k8s-different-port/serial/Stop 12.58
318 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.32
319 TestStartStop/group/default-k8s-different-port/serial/SecondStart 326.84
321 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 10.01
322 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 6.59
323 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.46
326 TestStartStop/group/newest-cni/serial/FirstStart 36.98
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.74
329 TestStartStop/group/newest-cni/serial/Stop 12.56
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.33
331 TestStartStop/group/newest-cni/serial/SecondStart 17.52
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.52
337 TestStartStop/group/embed-certs/serial/FirstStart 41.36
338 TestStartStop/group/embed-certs/serial/DeployApp 10.73
339 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.72
340 TestStartStop/group/embed-certs/serial/Stop 12.56
341 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
342 TestStartStop/group/embed-certs/serial/SecondStart 334.6
344 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 21.01
345 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.58
346 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.5
x
+
TestDownloadOnly/v1.16.0/json-events (21.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220602101144-2113 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220602101144-2113 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (21.564903128s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (21.57s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220602101144-2113
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220602101144-2113: exit status 85 (316.327314ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 10:11:44
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 10:11:44.965582    2124 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:11:44.965780    2124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:11:44.965786    2124 out.go:309] Setting ErrFile to fd 2...
	I0602 10:11:44.965789    2124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:11:44.965898    2124 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	W0602 10:11:44.965992    2124 root.go:300] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/config/config.json: no such file or directory
	I0602 10:11:44.966427    2124 out.go:303] Setting JSON to true
	I0602 10:11:44.982001    2124 start.go:115] hostinfo: {"hostname":"37309.local","uptime":674,"bootTime":1654189230,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 10:11:44.982114    2124 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 10:11:45.006344    2124 out.go:97] [download-only-20220602101144-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 10:11:45.006450    2124 notify.go:193] Checking for updates...
	W0602 10:11:45.006499    2124 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball: no such file or directory
	I0602 10:11:45.026730    2124 out.go:169] MINIKUBE_LOCATION=14269
	I0602 10:11:45.068889    2124 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 10:11:45.110892    2124 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 10:11:45.131970    2124 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 10:11:45.152841    2124 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	W0602 10:11:45.194818    2124 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0602 10:11:45.195046    2124 driver.go:358] Setting default libvirt URI to qemu:///system
	W0602 10:11:45.258776    2124 docker.go:113] docker version returned error: exit status 1
	I0602 10:11:45.279688    2124 out.go:97] Using the docker driver based on user configuration
	I0602 10:11:45.279711    2124 start.go:284] selected driver: docker
	I0602 10:11:45.279719    2124 start.go:806] validating driver "docker" against <nil>
	I0602 10:11:45.279845    2124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:11:45.397508    2124 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SB
OM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:11:45.418858    2124 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0602 10:11:45.439721    2124 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0602 10:11:45.481683    2124 out.go:169] 
	W0602 10:11:45.502844    2124 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0602 10:11:45.523505    2124 out.go:169] 
	I0602 10:11:45.565741    2124 out.go:169] 
	W0602 10:11:45.586508    2124 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0602 10:11:45.586620    2124 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0602 10:11:45.586663    2124 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0602 10:11:45.607757    2124 out.go:169] 
	I0602 10:11:45.628855    2124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:11:45.747971    2124 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SB
OM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0602 10:11:45.769577    2124 out.go:272] docker is currently using the  storage driver, consider switching to overlay2 for better performance
	I0602 10:11:45.769648    2124 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 10:11:45.815757    2124 out.go:169] 
	W0602 10:11:45.836513    2124 out_reason.go:110] Docker Desktop only has 0MiB available, less than the required 1800MiB for Kubernetes
	W0602 10:11:45.836615    2124 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0602 10:11:45.836659    2124 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0602 10:11:45.857616    2124 out.go:169] 
	I0602 10:11:45.899509    2124 out.go:169] 
	W0602 10:11:45.920986    2124 out_reason.go:110] docker only has 0MiB available, less than the required 1800MiB for Kubernetes
	I0602 10:11:45.941652    2124 out.go:169] 
	I0602 10:11:45.962746    2124 start_flags.go:373] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0602 10:11:45.962857    2124 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0602 10:11:45.983910    2124 out.go:169] Using Docker Desktop driver with the root privilege
	I0602 10:11:46.004885    2124 cni.go:95] Creating CNI manager for ""
	I0602 10:11:46.004904    2124 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:11:46.004919    2124 start_flags.go:306] config:
	{Name:download-only-20220602101144-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220602101144-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:11:46.033104    2124 out.go:97] Starting control plane node download-only-20220602101144-2113 in cluster download-only-20220602101144-2113
	I0602 10:11:46.033135    2124 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 10:11:46.053465    2124 out.go:97] Pulling base image ...
	I0602 10:11:46.053519    2124 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 10:11:46.053550    2124 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 10:11:46.053671    2124 cache.go:107] acquiring lock: {Name:mkdde9f9d80d920e7e403c8a91a985aa38c1e9d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:11:46.053682    2124 cache.go:107] acquiring lock: {Name:mk2b46d74084f11dbd7eb4dfcfef598e311dae00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:11:46.053748    2124 cache.go:107] acquiring lock: {Name:mkfcb7367b4e2601ce0873c20cba2590b80288e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:11:46.054459    2124 cache.go:107] acquiring lock: {Name:mk71cc5a6f9c75b0624a23a2bd3838c2853f3adf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:11:46.054546    2124 cache.go:107] acquiring lock: {Name:mk452e994336d5f7189f53453a369988233da169 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:11:46.054602    2124 cache.go:107] acquiring lock: {Name:mke567f1e8fde223f1b623ae1004bdaee8eba9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:11:46.054609    2124 cache.go:107] acquiring lock: {Name:mkbc45b0265f4ce00d5ee8d5500704056d9a3112 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:11:46.054567    2124 cache.go:107] acquiring lock: {Name:mk6fcbff57a34cd9e4414c3fff20ff489a1c1292 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 10:11:46.054771    2124 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/download-only-20220602101144-2113/config.json ...
	I0602 10:11:46.054854    2124 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/download-only-20220602101144-2113/config.json: {Name:mk22012b1d3180b60a9fcbfffbfda5bbb8df5e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 10:11:46.055315    2124 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0602 10:11:46.055173    2124 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0602 10:11:46.055070    2124 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0602 10:11:46.055386    2124 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0602 10:11:46.055398    2124 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0602 10:11:46.055598    2124 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 10:11:46.055606    2124 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0602 10:11:46.055626    2124 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0602 10:11:46.055909    2124 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 10:11:46.056309    2124 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I0602 10:11:46.056312    2124 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/linux/amd64/v1.16.0/kubeadm
	I0602 10:11:46.056330    2124 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/linux/amd64/v1.16.0/kubelet
	I0602 10:11:46.061285    2124 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0602 10:11:46.062215    2124 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0602 10:11:46.062461    2124 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0602 10:11:46.063111    2124 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0602 10:11:46.063356    2124 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0602 10:11:46.063403    2124 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0602 10:11:46.063522    2124 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0602 10:11:46.063792    2124 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0602 10:11:46.116893    2124 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0602 10:11:46.117123    2124 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0602 10:11:46.117250    2124 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0602 10:11:46.603030    2124 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0602 10:11:46.697462    2124 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0602 10:11:46.700532    2124 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0
	I0602 10:11:46.700814    2124 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0602 10:11:46.700828    2124 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 647.079418ms
	I0602 10:11:46.700838    2124 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0602 10:11:46.702975    2124 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2
	I0602 10:11:46.711654    2124 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0
	I0602 10:11:46.751858    2124 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0
	I0602 10:11:46.810488    2124 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0
	I0602 10:11:46.867726    2124 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0602 10:11:48.689287    2124 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0602 10:11:48.689305    2124 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.635620556s
	I0602 10:11:48.689317    2124 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0602 10:11:48.866836    2124 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 exists
	I0602 10:11:48.866852    2124 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2" took 2.813077397s
	I0602 10:11:48.866860    2124 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 succeeded
	I0602 10:11:49.901953    2124 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0602 10:11:50.202028    2124 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0602 10:11:50.202048    2124 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0" took 4.147751358s
	I0602 10:11:50.202058    2124 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0602 10:11:50.219586    2124 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0602 10:11:50.219601    2124 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0" took 4.165888459s
	I0602 10:11:50.219610    2124 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0602 10:11:50.885333    2124 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0602 10:11:50.885349    2124 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0" took 4.830777291s
	I0602 10:11:50.885357    2124 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0602 10:11:51.007923    2124 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0602 10:11:51.007939    2124 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0" took 4.953468672s
	I0602 10:11:51.007948    2124 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0602 10:11:51.677195    2124 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 exists
	I0602 10:11:51.677211    2124 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0" took 5.623404344s
	I0602 10:11:51.677219    2124 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0602 10:11:51.677232    2124 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220602101144-2113"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.32s)

                                                
                                    
x
+
TestDownloadOnly/v1.23.6/json-events (6.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220602101144-2113 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220602101144-2113 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker : (6.833630227s)
--- PASS: TestDownloadOnly/v1.23.6/json-events (6.83s)

                                                
                                    
x
+
TestDownloadOnly/v1.23.6/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/preload-exists
--- PASS: TestDownloadOnly/v1.23.6/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.23.6/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/kubectl
--- PASS: TestDownloadOnly/v1.23.6/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.23.6/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220602101144-2113
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220602101144-2113: exit status 85 (285.180921ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 10:12:07
	Running on machine: 37309
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 10:12:07.096468    2179 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:12:07.096722    2179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:12:07.096728    2179 out.go:309] Setting ErrFile to fd 2...
	I0602 10:12:07.096732    2179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:12:07.096841    2179 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	W0602 10:12:07.096954    2179 root.go:300] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/config/config.json: no such file or directory
	I0602 10:12:07.097103    2179 out.go:303] Setting JSON to true
	I0602 10:12:07.112557    2179 start.go:115] hostinfo: {"hostname":"37309.local","uptime":697,"bootTime":1654189230,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 10:12:07.112663    2179 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 10:12:07.134597    2179 out.go:97] [download-only-20220602101144-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 10:12:07.134705    2179 notify.go:193] Checking for updates...
	W0602 10:12:07.134701    2179 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball: no such file or directory
	I0602 10:12:07.155283    2179 out.go:169] MINIKUBE_LOCATION=14269
	I0602 10:12:07.176459    2179 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 10:12:07.197318    2179 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 10:12:07.218409    2179 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 10:12:07.260466    2179 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	W0602 10:12:07.304597    2179 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0602 10:12:07.305249    2179 config.go:178] Loaded profile config "download-only-20220602101144-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0602 10:12:07.305331    2179 start.go:714] api.Load failed for download-only-20220602101144-2113: filestore "download-only-20220602101144-2113": Docker machine "download-only-20220602101144-2113" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0602 10:12:07.305406    2179 driver.go:358] Setting default libvirt URI to qemu:///system
	W0602 10:12:07.305440    2179 start.go:714] api.Load failed for download-only-20220602101144-2113: filestore "download-only-20220602101144-2113": Docker machine "download-only-20220602101144-2113" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0602 10:12:07.376460    2179 docker.go:137] docker version: linux-20.10.14
	I0602 10:12:07.376600    2179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:12:07.504964    2179 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-06-02 17:12:07.439647091 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:12:07.526122    2179 out.go:97] Using the docker driver based on existing profile
	I0602 10:12:07.526158    2179 start.go:284] selected driver: docker
	I0602 10:12:07.526167    2179 start.go:806] validating driver "docker" against &{Name:download-only-20220602101144-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220602101144-2113 Namesp
ace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:12:07.526576    2179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:12:07.651670    2179 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-06-02 17:12:07.590904958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:12:07.653740    2179 cni.go:95] Creating CNI manager for ""
	I0602 10:12:07.653767    2179 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 10:12:07.653782    2179 start_flags.go:306] config:
	{Name:download-only-20220602101144-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:download-only-20220602101144-2113 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:12:07.675086    2179 out.go:97] Starting control plane node download-only-20220602101144-2113 in cluster download-only-20220602101144-2113
	I0602 10:12:07.675136    2179 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 10:12:07.696874    2179 out.go:97] Pulling base image ...
	I0602 10:12:07.696988    2179 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 10:12:07.697090    2179 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 10:12:07.761737    2179 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0602 10:12:07.761888    2179 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0602 10:12:07.761905    2179 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0602 10:12:07.761908    2179 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0602 10:12:07.761916    2179 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0602 10:12:07.762082    2179 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 10:12:07.762097    2179 cache.go:57] Caching tarball of preloaded images
	I0602 10:12:07.762239    2179 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 10:12:07.783828    2179 out.go:97] Downloading Kubernetes v1.23.6 preload ...
	I0602 10:12:07.783915    2179 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0602 10:12:07.875849    2179 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4?checksum=md5:a6c3f222f3cce2a88e27e126d64eb717 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220602101144-2113"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6/LogsDuration (0.29s)
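The download line in the log above fetches the v1.23.6 preload tarball with an md5 checksum appended as a "?checksum=md5:..." query parameter. As a minimal sketch (not minikube's actual implementation), verifying that kind of download in Go could look like the following; the local filename "preload.tar.lz4" is an arbitrary example, while the URL and checksum are the ones printed in the log:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url to dest and checks the md5 of the bytes written.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the payload while writing it to disk.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4",
		"preload.tar.lz4",
		"a6c3f222f3cce2a88e27e126d64eb717",
	)
	fmt.Println(err)
}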

                                                
                                    
TestDownloadOnly/DeleteAll (0.75s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.75s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.43s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220602101144-2113
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.43s)

                                                
                                    
TestDownloadOnlyKic (7.09s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220602101215-2113 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220602101215-2113 --force --alsologtostderr --driver=docker : (5.947043241s)
helpers_test.go:175: Cleaning up "download-docker-20220602101215-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220602101215-2113
--- PASS: TestDownloadOnlyKic (7.09s)

                                                
                                    
TestBinaryMirror (1.71s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220602101222-2113 --alsologtostderr --binary-mirror http://127.0.0.1:49611 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220602101222-2113 --alsologtostderr --binary-mirror http://127.0.0.1:49611 --driver=docker : (1.049709117s)
helpers_test.go:175: Cleaning up "binary-mirror-20220602101222-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220602101222-2113
--- PASS: TestBinaryMirror (1.71s)

                                                
                                    
TestOffline (43.91s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220602104455-2113 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220602104455-2113 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (40.892341848s)
helpers_test.go:175: Cleaning up "offline-docker-20220602104455-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220602104455-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220602104455-2113: (3.020197425s)
--- PASS: TestOffline (43.91s)

                                                
                                    
TestAddons/Setup (108.01s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220602101224-2113 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220602101224-2113 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m48.0072645s)
--- PASS: TestAddons/Setup (108.01s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 2.008215ms
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-bd6f4dd56-czfr7" [7984b65e-1ca5-4152-90dd-afce5685d990] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006737474s
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220602101224-2113 top pods -n kube-system
addons_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220602101224-2113 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.65s)

                                                
                                    
TestAddons/parallel/HelmTiller (13.35s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 10.748768ms

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-6d67d5465d-g5765" [c58df558-7cd8-4b9e-9733-40e9a2269964] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.011719562s
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220602101224-2113 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220602101224-2113 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.844450298s)
addons_test.go:440: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220602101224-2113 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.35s)

                                                
                                    
TestAddons/parallel/CSI (39.43s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 7.689949ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220602101224-2113 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220602101224-2113 get pvc hpvc -o jsonpath={.status.phase} -n default

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220602101224-2113 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220602101224-2113 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [67e4d74b-1ec2-4f3b-bcfb-03a69c9bf7d2] Pending
helpers_test.go:342: "task-pv-pod" [67e4d74b-1ec2-4f3b-bcfb-03a69c9bf7d2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [67e4d74b-1ec2-4f3b-bcfb-03a69c9bf7d2] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.006499549s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220602101224-2113 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220602101224-2113 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220602101224-2113 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220602101224-2113 delete pod task-pv-pod
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220602101224-2113 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220602101224-2113 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220602101224-2113 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220602101224-2113 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [31561626-9197-4f88-b0a8-769d120a861a] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [31561626-9197-4f88-b0a8-769d120a861a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [31561626-9197-4f88-b0a8-769d120a861a] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.009881352s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220602101224-2113 delete pod task-pv-pod-restore
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220602101224-2113 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220602101224-2113 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220602101224-2113 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220602101224-2113 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.770979228s)
addons_test.go:592: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220602101224-2113 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.43s)

                                                
                                    
TestAddons/serial/GCPAuth (14.66s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220602101224-2113 create -f testdata/busybox.yaml
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [e2f2abf7-4a6d-4c9a-92fa-db8556348b54] Pending
helpers_test.go:342: "busybox" [e2f2abf7-4a6d-4c9a-92fa-db8556348b54] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [e2f2abf7-4a6d-4c9a-92fa-db8556348b54] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.008059918s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220602101224-2113 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:628: (dbg) Run:  kubectl --context addons-20220602101224-2113 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220602101224-2113 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220602101224-2113 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220602101224-2113 addons disable gcp-auth --alsologtostderr -v=1: (5.851373878s)
--- PASS: TestAddons/serial/GCPAuth (14.66s)

                                                
                                    
TestAddons/StoppedEnableDisable (13s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220602101224-2113
addons_test.go:132: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220602101224-2113: (12.618898563s)
addons_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220602101224-2113
addons_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220602101224-2113
--- PASS: TestAddons/StoppedEnableDisable (13.00s)

                                                
                                    
TestCertOptions (29.38s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220602104618-2113 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220602104618-2113 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (25.407079821s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220602104618-2113 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220602104618-2113 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220602104618-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220602104618-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220602104618-2113: (2.989439447s)
--- PASS: TestCertOptions (29.38s)
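The cert-options run above passes extra --apiserver-ips and --apiserver-names and then inspects /var/lib/minikube/certs/apiserver.crt with openssl over ssh. A rough Go equivalent of that inspection, assuming the certificate has first been copied out of the node to a local file named apiserver.crt (an assumption for this sketch, not something the test itself does), would be:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// apiserver.crt is assumed to be a local copy of the node's apiserver certificate.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// A check in the spirit of the test would look for www.google.com and
	// 192.168.15.15 among the subject alternative names printed here.
	fmt.Println("DNS SANs:", cert.DNSNames)
	for _, ip := range cert.IPAddresses {
		fmt.Println("IP SAN:", ip.String())
	}
}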

                                                
                                    
TestCertExpiration (213.58s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220602104608-2113 --memory=2048 --cert-expiration=3m --driver=docker 

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220602104608-2113 --memory=2048 --cert-expiration=3m --driver=docker : (25.280022821s)

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220602104608-2113 --memory=2048 --cert-expiration=8760h --driver=docker 
E0602 10:49:34.153819    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220602104608-2113 --memory=2048 --cert-expiration=8760h --driver=docker : (5.359128984s)
helpers_test.go:175: Cleaning up "cert-expiration-20220602104608-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220602104608-2113
E0602 10:49:39.276578    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220602104608-2113: (2.942173697s)
--- PASS: TestCertExpiration (213.58s)

                                                
                                    
TestDockerFlags (28.79s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220602104549-2113 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20220602104549-2113 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (24.980007309s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220602104549-2113 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220602104549-2113 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-20220602104549-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220602104549-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20220602104549-2113: (2.900401747s)
--- PASS: TestDockerFlags (28.79s)
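The docker-flags run starts a profile with --docker-env and --docker-opt values and then reads them back with "systemctl show docker" over ssh. The sketch below is an illustrative approximation of that kind of assertion, not the test's exact code; it reuses the binary path and profile name shown in the log above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the docker systemd unit inside the minikube node for its Environment.
	out, err := exec.Command(
		"out/minikube-darwin-amd64", "-p", "docker-flags-20220602104549-2113",
		"ssh", "sudo systemctl show docker --property=Environment --no-pager",
	).CombinedOutput()
	if err != nil {
		fmt.Println("minikube ssh failed:", err)
		return
	}
	// The values passed via --docker-env should show up in the unit's Environment.
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(string(out), want) {
			fmt.Printf("expected %q in docker Environment, got:\n%s\n", want, out)
		}
	}
}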

                                                
                                    
TestForceSystemdFlag (29.68s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220602104538-2113 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220602104538-2113 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (26.224222867s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220602104538-2113 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220602104538-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220602104538-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220602104538-2113: (2.937141433s)
--- PASS: TestForceSystemdFlag (29.68s)

                                                
                                    
TestForceSystemdEnv (28.51s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220602104521-2113 --memory=2048 --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220602104521-2113 --memory=2048 --alsologtostderr -v=5 --driver=docker : (24.934290357s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220602104521-2113 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-20220602104521-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220602104521-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220602104521-2113: (3.038823696s)
--- PASS: TestForceSystemdEnv (28.51s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (7.62s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.62s)

                                                
                                    
TestErrorSpam/setup (23.05s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220602101536-2113 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 --driver=docker 
error_spam_test.go:78: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220602101536-2113 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 --driver=docker : (23.051233155s)
--- PASS: TestErrorSpam/setup (23.05s)

                                                
                                    
TestErrorSpam/start (2.24s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 start --dry-run
--- PASS: TestErrorSpam/start (2.24s)

                                                
                                    
TestErrorSpam/status (1.32s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 status
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 status
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 status
--- PASS: TestErrorSpam/status (1.32s)

                                                
                                    
TestErrorSpam/pause (1.92s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 pause
--- PASS: TestErrorSpam/pause (1.92s)

                                                
                                    
TestErrorSpam/unpause (1.95s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 unpause
--- PASS: TestErrorSpam/unpause (1.95s)

                                                
                                    
TestErrorSpam/stop (13.23s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 stop
error_spam_test.go:156: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 stop: (12.56432111s)
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220602101536-2113 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220602101536-2113 stop
--- PASS: TestErrorSpam/stop (13.23s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/test/nested/copy/2113/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (39.6s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220602101622-2113 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2160: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220602101622-2113 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (39.595482258s)
--- PASS: TestFunctional/serial/StartWithProxy (39.60s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.23s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220602101622-2113 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220602101622-2113 --alsologtostderr -v=8: (6.229291266s)
functional_test.go:655: soft start took 6.229765871s for "functional-20220602101622-2113" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.23s)

                                                
                                    
TestFunctional/serial/KubeContext (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220602101622-2113 get po -A
functional_test.go:688: (dbg) Done: kubectl --context functional-20220602101622-2113 get po -A: (1.472387249s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.47s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 cache add k8s.gcr.io/pause:3.1: (1.056787484s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 cache add k8s.gcr.io/pause:3.3: (1.69371604s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 cache add k8s.gcr.io/pause:latest: (1.578583722s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220602101622-2113 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local3181924046/001
functional_test.go:1081: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 cache add minikube-local-cache-test:functional-20220602101622-2113
functional_test.go:1081: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 cache add minikube-local-cache-test:functional-20220602101622-2113: (1.259393334s)
functional_test.go:1086: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 cache delete minikube-local-cache-test:functional-20220602101622-2113
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220602101622-2113
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.77s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (450.610477ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 cache reload: (1.064495987s)
functional_test.go:1155: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.42s)
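
(Condensed, the cache_reload check above is the following sequence, copied from the log; profile name and image are from this run.)

    # remove the image inside the node, confirm it is gone, then restore it from minikube's on-disk cache
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh sudo docker rmi k8s.gcr.io/pause:latest
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # fails: image no longer present
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 cache reload
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds after the reload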

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.49s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 kubectl -- --context functional-20220602101622-2113 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.49s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.62s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220602101622-2113 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.62s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.24s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220602101622-2113 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220602101622-2113 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.236039585s)
functional_test.go:753: restart took 33.23617648s for "functional-20220602101622-2113" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.24s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220602101622-2113 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                    
TestFunctional/serial/LogsCmd (3.21s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 logs
functional_test.go:1228: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 logs: (3.204962121s)
--- PASS: TestFunctional/serial/LogsCmd (3.21s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.27s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd1323946420/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd1323946420/001/logs.txt: (3.267577621s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.27s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220602101622-2113 config get cpus: exit status 14 (51.746382ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220602101622-2113 config get cpus: exit status 14 (50.949906ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
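
(The config round trip above, condensed from the log; exit status 14 is what the CLI returned here when the key was unset.)

    out/minikube-darwin-amd64 -p functional-20220602101622-2113 config unset cpus
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 config get cpus    # exit 14: key not present in config
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 config set cpus 2
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 config get cpus    # succeeds: value was just set
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 config unset cpus
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 config get cpus    # exit 14 again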

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.17s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220602101622-2113 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220602101622-2113 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 3995: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.17s)

                                                
                                    
TestFunctional/parallel/DryRun (1.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220602101622-2113 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:966: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220602101622-2113 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (614.179969ms)

                                                
                                                
-- stdout --
	* [functional-20220602101622-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 10:18:59.440865    3949 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:18:59.441042    3949 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:18:59.441048    3949 out.go:309] Setting ErrFile to fd 2...
	I0602 10:18:59.441052    3949 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:18:59.441140    3949 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 10:18:59.441434    3949 out.go:303] Setting JSON to false
	I0602 10:18:59.456541    3949 start.go:115] hostinfo: {"hostname":"37309.local","uptime":1109,"bootTime":1654189230,"procs":342,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 10:18:59.456697    3949 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 10:18:59.479513    3949 out.go:177] * [functional-20220602101622-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	I0602 10:18:59.522631    3949 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 10:18:59.544328    3949 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 10:18:59.567297    3949 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 10:18:59.589145    3949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 10:18:59.610473    3949 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 10:18:59.632733    3949 config.go:178] Loaded profile config "functional-20220602101622-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 10:18:59.633344    3949 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 10:18:59.705066    3949 docker.go:137] docker version: linux-20.10.14
	I0602 10:18:59.705223    3949 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:18:59.830066    3949 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-02 17:18:59.764076863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:18:59.873894    3949 out.go:177] * Using the docker driver based on existing profile
	I0602 10:18:59.895629    3949 start.go:284] selected driver: docker
	I0602 10:18:59.895654    3949 start.go:806] validating driver "docker" against &{Name:functional-20220602101622-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602101622-2113 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regis
try:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:18:59.895823    3949 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 10:18:59.920645    3949 out.go:177] 
	W0602 10:18:59.942007    3949 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0602 10:18:59.963669    3949 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220602101622-2113 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.29s)
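
(The two dry-run invocations above, condensed from the log; the 250MB request is deliberately below the 1800MB minimum minikube reports, hence exit status 23.)

    out/minikube-darwin-amd64 start -p functional-20220602101622-2113 --dry-run --memory 250MB --alsologtostderr --driver=docker   # fails: RSRC_INSUFFICIENT_REQ_MEMORY
    out/minikube-darwin-amd64 start -p functional-20220602101622-2113 --dry-run --alsologtostderr -v=1 --driver=docker             # passes with the existing profile's settings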

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.63s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220602101622-2113 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220602101622-2113 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (627.335796ms)

                                                
                                                
-- stdout --
	* [functional-20220602101622-2113] minikube v1.26.0-beta.1 sur Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 10:18:57.487456    3907 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:18:57.487597    3907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:18:57.487602    3907 out.go:309] Setting ErrFile to fd 2...
	I0602 10:18:57.487606    3907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:18:57.487714    3907 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 10:18:57.487938    3907 out.go:303] Setting JSON to false
	I0602 10:18:57.503467    3907 start.go:115] hostinfo: {"hostname":"37309.local","uptime":1107,"bootTime":1654189230,"procs":342,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0602 10:18:57.503553    3907 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 10:18:57.525594    3907 out.go:177] * [functional-20220602101622-2113] minikube v1.26.0-beta.1 sur Darwin 12.4
	I0602 10:18:57.568252    3907 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 10:18:57.589555    3907 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 10:18:57.611538    3907 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0602 10:18:57.633310    3907 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 10:18:57.675105    3907 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 10:18:57.697082    3907 config.go:178] Loaded profile config "functional-20220602101622-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 10:18:57.697731    3907 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 10:18:57.770337    3907 docker.go:137] docker version: linux-20.10.14
	I0602 10:18:57.770472    3907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 10:18:57.895712    3907 info.go:265] docker info: {ID:C5NF:DVAK:4VSL:OFCK:WSCS:COTV:FLGY:HFH6:BUX6:EBAE:4VCN:IKAC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-02 17:18:57.848691724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 10:18:57.937634    3907 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0602 10:18:57.958585    3907 start.go:284] selected driver: docker
	I0602 10:18:57.958608    3907 start.go:806] validating driver "docker" against &{Name:functional-20220602101622-2113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602101622-2113 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regis
try:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 10:18:57.958784    3907 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 10:18:57.983961    3907 out.go:177] 
	W0602 10:18:58.006079    3907 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0602 10:18:58.027740    3907 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.63s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.32s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 status
functional_test.go:852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:864: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd (16.04s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220602101622-2113 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220602101622-2113 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-4m9zs" [0dab1cf8-0e39-4f5b-ad98-f6919179b782] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-4m9zs" [0dab1cf8-0e39-4f5b-ad98-f6919179b782] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 8.008388015s
functional_test.go:1448: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 service list
functional_test.go:1448: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 service list: (1.857447071s)
functional_test.go:1462: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 service --namespace=default --https --url hello-node
functional_test.go:1462: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 service --namespace=default --https --url hello-node: (2.022450151s)
functional_test.go:1475: found endpoint: https://127.0.0.1:52666
functional_test.go:1490: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 service hello-node --url --format={{.IP}}
functional_test.go:1490: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 service hello-node --url --format={{.IP}}: (2.024617093s)
functional_test.go:1504: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 service hello-node --url
functional_test.go:1504: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 service hello-node --url: (2.023715712s)
functional_test.go:1510: found endpoint for hello-node: http://127.0.0.1:52738
--- PASS: TestFunctional/parallel/ServiceCmd (16.04s)
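
(The service checks above, condensed from the log; deployment name, image, and port are the ones used by the test.)

    kubectl --context functional-20220602101622-2113 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
    kubectl --context functional-20220602101622-2113 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 service list
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 service --namespace=default --https --url hello-node
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 service hello-node --url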

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.44s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 addons list

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1631: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.44s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.5s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [5dbe3672-c672-40c9-a4bf-aeb9e0a58c5f] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013858723s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220602101622-2113 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220602101622-2113 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220602101622-2113 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220602101622-2113 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [0d1c8c68-2930-4710-91e7-02322fd1aa79] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [0d1c8c68-2930-4710-91e7-02322fd1aa79] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [0d1c8c68-2930-4710-91e7-02322fd1aa79] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.008669965s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220602101622-2113 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220602101622-2113 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220602101622-2113 apply -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [a7d18413-ef51-4b95-b05f-a0da442a5f84] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [a7d18413-ef51-4b95-b05f-a0da442a5f84] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [a7d18413-ef51-4b95-b05f-a0da442a5f84] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.005922144s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220602101622-2113 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.50s)
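
(The persistence check above, condensed from the log; the pvc.yaml/pod.yaml manifests are the testdata/storage-provisioner files referenced by the test.)

    kubectl --context functional-20220602101622-2113 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-20220602101622-2113 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-20220602101622-2113 exec sp-pod -- touch /tmp/mount/foo    # write a file through the claim
    kubectl --context functional-20220602101622-2113 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-20220602101622-2113 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-20220602101622-2113 exec sp-pod -- ls /tmp/mount           # the file survives the pod being recreated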

                                                
                                    
TestFunctional/parallel/SSHCmd (0.92s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "echo hello"
functional_test.go:1671: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.92s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.75s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh -n functional-20220602101622-2113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 cp functional-20220602101622-2113:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd2241755055/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh -n functional-20220602101622-2113 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.75s)
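
(The cp round trip above, condensed from the log; the local destination is shown as a placeholder for the per-run temp directory.)

    out/minikube-darwin-amd64 -p functional-20220602101622-2113 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh -n functional-20220602101622-2113 "sudo cat /home/docker/cp-test.txt"
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 cp functional-20220602101622-2113:/home/docker/cp-test.txt <local-dir>/cp-test.txt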

                                                
                                    
TestFunctional/parallel/MySQL (19.25s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220602101622-2113 replace --force -f testdata/mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-knwlb" [5f3b1f53-0fe1-4612-a3d6-793d442b259f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-knwlb" [5f3b1f53-0fe1-4612-a3d6-793d442b259f] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.009008256s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220602101622-2113 exec mysql-b87c45988-knwlb -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220602101622-2113 exec mysql-b87c45988-knwlb -- mysql -ppassword -e "show databases;": exit status 1 (103.813754ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220602101622-2113 exec mysql-b87c45988-knwlb -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220602101622-2113 exec mysql-b87c45988-knwlb -- mysql -ppassword -e "show databases;": exit status 1 (102.311718ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220602101622-2113 exec mysql-b87c45988-knwlb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.25s)
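
(The MySQL check above simply retries the query until mysqld inside the pod accepts connections; condensed from the log, with the pod name from this run.)

    kubectl --context functional-20220602101622-2113 replace --force -f testdata/mysql.yaml
    # early attempts fail with ERROR 2002 while the server is still starting, as seen above
    kubectl --context functional-20220602101622-2113 exec mysql-b87c45988-knwlb -- mysql -ppassword -e "show databases;"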

                                                
                                    
TestFunctional/parallel/FileSync (0.43s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/2113/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo cat /etc/test/nested/copy/2113/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

                                                
                                    
TestFunctional/parallel/CertSync (2.64s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/2113.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo cat /etc/ssl/certs/2113.pem"
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/2113.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo cat /usr/share/ca-certificates/2113.pem"
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo cat /etc/ssl/certs/51391683.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/21132.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo cat /etc/ssl/certs/21132.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/21132.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo cat /usr/share/ca-certificates/21132.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.64s)
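
(The cert sync check above just cats the synced certificate files inside the node; condensed from the log. The hashed names 51391683.0 and 3ec20f2e.0 are specific to this run.)

    out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo cat /etc/ssl/certs/2113.pem"
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo cat /usr/share/ca-certificates/2113.pem"
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo cat /etc/ssl/certs/51391683.0"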

                                                
                                    
TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220602101622-2113 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo systemctl is-active crio": exit status 1 (460.862595ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                    
TestFunctional/parallel/Version/short (0.16s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 version --short
--- PASS: TestFunctional/parallel/Version/short (0.16s)

                                                
                                    
TestFunctional/parallel/Version/components (0.65s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220602101622-2113
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220602101622-2113
docker.io/kubernetesui/metrics-scraper:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)
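
(This block and the two that follow exercise the same listing in three output formats; condensed from the logs.)

    out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls --format short
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls --format table
    out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls --format json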

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-20220602101622-2113 | ffbdadfbadfb9 | 30B    |
| docker.io/kubernetesui/metrics-scraper      | <none>                         | 115053965e86b | 43.8MB |
| docker.io/library/nginx                     | latest                         | 0e901e68141fd | 142MB  |
| docker.io/kubernetesui/dashboard            | <none>                         | 1042d9e0d8fcc | 246MB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.6                        | df7b72818ad2e | 125MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | 56cc512116c8f | 4.4MB  |
| gcr.io/google-containers/addon-resizer      | functional-20220602101622-2113 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
| docker.io/library/mysql                     | 5.7                            | 2a0961b7de03c | 462MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/kube-proxy                       | v1.23.6                        | 4c03754524064 | 112MB  |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | alpine                         | b1c3acb288825 | 23.4MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.6                        | 8fa62c12256df | 135MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.6                        | 595f327f224a4 | 53.5MB |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls --format json:
[{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.6"],"size":"125000000"},{"id":"4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.6"],"size":"112000000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.6"],"size":"53500000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220602101622-2113"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.6"],"size":"135000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"ffbdadfbadfb95178d24a7a7c390351061ddeca3a6b74810f62a429462b145e9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220602101622-2113"],"size":"30"},{"id":"1042d9e0d8fcc64f2c6b9ade3af9e8ed255fa04d18d838d0b3650ad7636534a9","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"462000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.38s)
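Note on the JSON listing above: it is a single flat array of objects with id, repoDigests, repoTags and size fields (size is reported as a string of bytes). Below is a minimal Go sketch for decoding that output outside the test harness; it is not part of the test suite, and the binary path and profile name are simply copied from the log above.

// imagelist.go: a minimal sketch (not part of the minikube test suite) for
// decoding the `image ls --format json` output shown above. Field names
// (id, repoDigests, repoTags, size) are taken from that output.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a string of bytes
}

func main() {
	// Binary path and profile name are copied from the log above; adjust locally.
	out, err := exec.Command("out/minikube-darwin-amd64", "-p",
		"functional-20220602101622-2113", "image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}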

TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls --format yaml:
- id: 8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.6
size: "135000000"
- id: df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.6
size: "125000000"
- id: 595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.6
size: "53500000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220602101622-2113
size: "32900000"
- id: 2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "462000000"
- id: b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.6
size: "112000000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: ffbdadfbadfb95178d24a7a7c390351061ddeca3a6b74810f62a429462b145e9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220602101622-2113
size: "30"
- id: 1042d9e0d8fcc64f2c6b9ade3af9e8ed255fa04d18d838d0b3650ad7636534a9
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh pgrep buildkitd: exit status 1 (407.524015ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image build -t localhost/my-image:functional-20220602101622-2113 testdata/build
E0602 10:19:12.584535    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:19:12.590464    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:19:12.600825    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:19:12.620965    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:19:12.661092    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:19:12.743270    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:19:12.905202    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 10:19:13.225503    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
2022/06/02 10:19:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 image build -t localhost/my-image:functional-20220602101622-2113 testdata/build: (2.321689768s)
functional_test.go:315: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220602101622-2113 image build -t localhost/my-image:functional-20220602101622-2113 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 90b803f3e568
Removing intermediate container 90b803f3e568
---> 85bd0dbea4e4
Step 3/3 : ADD content.txt /
---> 806706ed362c
Successfully built 806706ed362c
Successfully tagged localhost/my-image:functional-20220602101622-2113
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.13s)
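The Step 1/3 through 3/3 lines above imply a three-instruction build context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). The Go sketch below reconstructs an equivalent context and reruns the same image build command; it is an illustration only, not the repository's testdata/build directory, and the content.txt payload is assumed.

// buildcontext.go: a hedged sketch (not from the minikube repo) recreating a
// build context equivalent to the three steps shown in the log above and
// running the same `image build` invocation.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Dockerfile reconstructed from the "Step n/3" lines above.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	// The file's contents are an assumption; only its name appears in the log.
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("content\n"), 0o644); err != nil {
		panic(err)
	}

	// Binary path, profile and tag are copied from the log; adjust for a local run.
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-20220602101622-2113",
		"image", "build", "-t", "localhost/my-image:functional-20220602101622-2113", dir)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "build failed:", err)
	}
}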

TestFunctional/parallel/ImageCommands/Setup (1.85s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.783178614s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220602101622-2113
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.85s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602101622-2113
functional_test.go:350: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602101622-2113: (2.830223195s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.15s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602101622-2113

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602101622-2113: (2.037462141s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220602101622-2113
functional_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602101622-2113
functional_test.go:240: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602101622-2113: (2.893944808s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.91s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image save gcr.io/google-containers/addon-resizer:functional-20220602101622-2113 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:375: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 image save gcr.io/google-containers/addon-resizer:functional-20220602101622-2113 /Users/jenkins/workspace/addon-resizer-save.tar: (1.346010143s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.35s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image rm gcr.io/google-containers/addon-resizer:functional-20220602101622-2113
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.89s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:404: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.561514158s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.89s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220602101622-2113
functional_test.go:419: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220602101622-2113
functional_test.go:419: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101622-2113 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220602101622-2113: (2.307855793s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220602101622-2113
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.44s)

TestFunctional/parallel/DockerEnv/bash (1.67s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:491: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220602101622-2113 docker-env) && out/minikube-darwin-amd64 status -p functional-20220602101622-2113"
functional_test.go:491: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220602101622-2113 docker-env) && out/minikube-darwin-amd64 status -p functional-20220602101622-2113": (1.015826474s)
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220602101622-2113 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.67s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 update-context --alsologtostderr -v=2
E0602 10:19:13.865863    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.41s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.41s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220602101622-2113 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220602101622-2113 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [528efabb-3861-43e7-ae1d-1137c1c1d713] Pending

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [528efabb-3861-43e7-ae1d-1137c1c1d713] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx-svc" [528efabb-3861-43e7-ae1d-1137c1c1d713] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.008152086s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.16s)

TestFunctional/parallel/MountCmd/any-port (11.03s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220602101622-2113 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3758037120/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1654190322156609000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3758037120/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1654190322156609000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3758037120/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1654190322156609000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3758037120/001/test-1654190322156609000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (417.636002ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun  2 17:18 created-by-test
-rw-r--r-- 1 docker docker 24 Jun  2 17:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun  2 17:18 test-1654190322156609000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh cat /mount-9p/test-1654190322156609000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220602101622-2113 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [3eb21e99-74c4-4a90-9fcf-124c0e388ce0] Pending
helpers_test.go:342: "busybox-mount" [3eb21e99-74c4-4a90-9fcf-124c0e388ce0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [3eb21e99-74c4-4a90-9fcf-124c0e388ce0] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:342: "busybox-mount" [3eb21e99-74c4-4a90-9fcf-124c0e388ce0] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.010053049s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220602101622-2113 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220602101622-2113 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3758037120/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.03s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220602101622-2113 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220602101622-2113 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 3698: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/specific-port (2.52s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220602101622-2113 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1183594546/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (414.518662ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220602101622-2113 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1183594546/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh "sudo umount -f /mount-9p": exit status 1 (406.478854ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-20220602101622-2113 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220602101622-2113 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1183594546/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.52s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.6s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.60s)

TestFunctional/parallel/ProfileCmd/profile_list (0.54s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-darwin-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "463.424151ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1324: Took "71.679459ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.6s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1361: Took "486.837409ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1374: Took "113.619151ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.60s)

TestFunctional/delete_addon-resizer_images (0.16s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220602101622-2113
E0602 10:19:15.148091    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.07s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220602101622-2113
--- PASS: TestFunctional/delete_my-image_image (0.07s)

TestFunctional/delete_minikube_cached_images (0.07s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220602101622-2113
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

TestJSONOutput/start/Command (41.14s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220602102635-2113 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220602102635-2113 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (41.139682061s)
--- PASS: TestJSONOutput/start/Command (41.14s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220602102635-2113 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220602102635-2113 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.39s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220602102635-2113 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220602102635-2113 --output=json --user=testUser: (12.39027678s)
--- PASS: TestJSONOutput/stop/Command (12.39s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220602102732-2113 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220602102732-2113 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (327.214816ms)

-- stdout --
	{"specversion":"1.0","id":"9667dd34-af6b-425e-bd6b-c61637205061","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220602102732-2113] minikube v1.26.0-beta.1 on Darwin 12.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"678485a7-2a1c-4aba-adf2-95644c80650e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14269"}}
	{"specversion":"1.0","id":"47b3e79d-7f8e-4f41-9e69-c3fd283fc6b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig"}}
	{"specversion":"1.0","id":"bef1542e-fd91-49d4-af38-a0e3e50168d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"f9ed0361-70fc-499b-a826-3410ded0b6f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6d38e4e6-ae53-4d2e-9c4d-dec2aee987ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube"}}
	{"specversion":"1.0","id":"c1da6d90-f278-4e5c-a825-0910ea43ad13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220602102732-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220602102732-2113
--- PASS: TestErrorJSONOutput (0.76s)
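Each stdout line in the failing-driver run above is a self-contained CloudEvents-style JSON record (specversion, id, source, type, datacontenttype, data). Below is a hedged Go sketch, not part of the test suite, for picking the error event out of such a stream; it uses only the fields visible in that output.

// events.go: a minimal sketch for decoding the line-delimited JSON records
// that `--output=json` emits, limited to the fields shown above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube start --output=json ...` into stdin; one JSON object per line.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long lines
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Fprintln(os.Stderr, "skipping line:", err)
			continue
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Println("error event:", ev.Data["name"], ev.Data["message"], "exitcode", ev.Data["exitcode"])
		}
	}
}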

TestKicCustomNetwork/create_custom_network (27.08s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220602102733-2113 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220602102733-2113 --network=: (24.299912055s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220602102733-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220602102733-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220602102733-2113: (2.710494979s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.08s)

TestKicCustomNetwork/use_default_bridge_network (24.68s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220602102800-2113 --network=bridge
E0602 10:28:01.281478    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220602102800-2113 --network=bridge: (22.075319148s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220602102800-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220602102800-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220602102800-2113: (2.540547374s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.68s)

TestKicExistingNetwork (26.97s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220602102825-2113 --network=existing-network
E0602 10:28:28.981758    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220602102825-2113 --network=existing-network: (23.785422825s)
helpers_test.go:175: Cleaning up "existing-network-20220602102825-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220602102825-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220602102825-2113: (2.763669791s)
--- PASS: TestKicExistingNetwork (26.97s)

                                                
                                    
TestKicCustomSubnet (25.28s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-20220602102852-2113 --subnet=192.168.60.0/24
E0602 10:29:12.586691    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-20220602102852-2113 --subnet=192.168.60.0/24: (22.464200573s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220602102852-2113 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220602102852-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-20220602102852-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-20220602102852-2113: (2.756003013s)
--- PASS: TestKicCustomSubnet (25.28s)
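
The subnet check above reduces to a start flag plus one docker inspect. A minimal sketch, assuming an illustrative profile name "subnetdemo" (the inspect format string is the one the test uses):
out/minikube-darwin-amd64 start -p subnetdemo --subnet=192.168.60.0/24
# the Docker network created for the profile should report the requested subnet
docker network inspect subnetdemo --format "{{(index .IPAM.Config 0).Subnet}}"
out/minikube-darwin-amd64 delete -p subnetdemo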

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (56.99s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-20220602102917-2113 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-20220602102917-2113 --driver=docker : (24.55533857s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-20220602102917-2113 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-20220602102917-2113 --driver=docker : (24.787346641s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-20220602102917-2113
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-20220602102917-2113
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220602102917-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-20220602102917-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-20220602102917-2113: (2.917178001s)
helpers_test.go:175: Cleaning up "first-20220602102917-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-20220602102917-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-20220602102917-2113: (2.726028807s)
--- PASS: TestMinikubeProfile (56.99s)
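
The profile bookkeeping exercised above is plain CLI usage. A minimal sketch, assuming illustrative profile names:
out/minikube-darwin-amd64 start -p first-demo --driver=docker
out/minikube-darwin-amd64 start -p second-demo --driver=docker
# switch the active profile, then dump all known profiles as JSON
out/minikube-darwin-amd64 profile first-demo
out/minikube-darwin-amd64 profile list -ojson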

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.42s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220602103014-2113 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220602103014-2113 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.416124796s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.42s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220602103014-2113 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)
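
Starting with a host mount and verifying it over SSH, as the two subtests above do, looks like this by hand. A minimal sketch, assuming an illustrative profile name and mount port; the mount is exposed inside the node at /minikube-host:
out/minikube-darwin-amd64 start -p mountdemo --memory=2048 --mount --mount-port 46464 --no-kubernetes --driver=docker
# list the mounted host directory from inside the node
out/minikube-darwin-amd64 -p mountdemo ssh -- ls /minikube-host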

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.11s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220602103014-2113 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220602103014-2113 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.102859632s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.11s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.44s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220602103014-2113 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.44s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.4s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220602103014-2113 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220602103014-2113 --alsologtostderr -v=5: (2.399143771s)
--- PASS: TestMountStart/serial/DeleteFirst (2.40s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220602103014-2113 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

                                                
                                    
TestMountStart/serial/Stop (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220602103014-2113
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220602103014-2113: (1.612971106s)
--- PASS: TestMountStart/serial/Stop (1.61s)

                                                
                                    
TestMountStart/serial/RestartStopped (4.99s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220602103014-2113
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220602103014-2113: (3.991300255s)
--- PASS: TestMountStart/serial/RestartStopped (4.99s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220602103014-2113 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (81.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220602103042-2113 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220602103042-2113 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m20.295655391s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.05s)
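
A two-node cluster like the one above comes from a single start invocation. A minimal sketch, assuming an illustrative profile name:
out/minikube-darwin-amd64 start -p multidemo --nodes=2 --memory=2200 --wait=true --driver=docker
# both the control plane and the worker should report Running
out/minikube-darwin-amd64 -p multidemo status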

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.713436359s)
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- rollout status deployment/busybox: (2.88452641s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- exec busybox-7978565885-rsmt9 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- exec busybox-7978565885-sb2hc -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- exec busybox-7978565885-rsmt9 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- exec busybox-7978565885-sb2hc -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- exec busybox-7978565885-rsmt9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- exec busybox-7978565885-sb2hc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.05s)
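
The DNS checks above follow an apply-then-exec pattern. A minimal sketch, assuming the illustrative profile "multidemo"; <pod> stands for a busybox pod name taken from kubectl get pods, and the manifest path is the one from the minikube test tree:
out/minikube-darwin-amd64 kubectl -p multidemo -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
out/minikube-darwin-amd64 kubectl -p multidemo -- rollout status deployment/busybox
# resolve an external name and the in-cluster API service from inside a pod
out/minikube-darwin-amd64 kubectl -p multidemo -- exec <pod> -- nslookup kubernetes.io
out/minikube-darwin-amd64 kubectl -p multidemo -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local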

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- exec busybox-7978565885-rsmt9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- exec busybox-7978565885-rsmt9 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- exec busybox-7978565885-sb2hc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220602103042-2113 -- exec busybox-7978565885-sb2hc -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

                                                
                                    
TestMultiNode/serial/AddNode (25.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220602103042-2113 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220602103042-2113 -v 3 --alsologtostderr: (24.546768863s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status --alsologtostderr: (1.094961156s)
--- PASS: TestMultiNode/serial/AddNode (25.64s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.51s)

                                                
                                    
TestMultiNode/serial/CopyFile (16.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status --output json --alsologtostderr: (1.092061125s)
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 cp testdata/cp-test.txt multinode-20220602103042-2113:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 cp multinode-20220602103042-2113:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3114104742/001/cp-test_multinode-20220602103042-2113.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 cp multinode-20220602103042-2113:/home/docker/cp-test.txt multinode-20220602103042-2113-m02:/home/docker/cp-test_multinode-20220602103042-2113_multinode-20220602103042-2113-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113-m02 "sudo cat /home/docker/cp-test_multinode-20220602103042-2113_multinode-20220602103042-2113-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 cp multinode-20220602103042-2113:/home/docker/cp-test.txt multinode-20220602103042-2113-m03:/home/docker/cp-test_multinode-20220602103042-2113_multinode-20220602103042-2113-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113-m03 "sudo cat /home/docker/cp-test_multinode-20220602103042-2113_multinode-20220602103042-2113-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 cp testdata/cp-test.txt multinode-20220602103042-2113-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 cp multinode-20220602103042-2113-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3114104742/001/cp-test_multinode-20220602103042-2113-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 cp multinode-20220602103042-2113-m02:/home/docker/cp-test.txt multinode-20220602103042-2113:/home/docker/cp-test_multinode-20220602103042-2113-m02_multinode-20220602103042-2113.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113 "sudo cat /home/docker/cp-test_multinode-20220602103042-2113-m02_multinode-20220602103042-2113.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 cp multinode-20220602103042-2113-m02:/home/docker/cp-test.txt multinode-20220602103042-2113-m03:/home/docker/cp-test_multinode-20220602103042-2113-m02_multinode-20220602103042-2113-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113-m03 "sudo cat /home/docker/cp-test_multinode-20220602103042-2113-m02_multinode-20220602103042-2113-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 cp testdata/cp-test.txt multinode-20220602103042-2113-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 cp multinode-20220602103042-2113-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3114104742/001/cp-test_multinode-20220602103042-2113-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 cp multinode-20220602103042-2113-m03:/home/docker/cp-test.txt multinode-20220602103042-2113:/home/docker/cp-test_multinode-20220602103042-2113-m03_multinode-20220602103042-2113.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113 "sudo cat /home/docker/cp-test_multinode-20220602103042-2113-m03_multinode-20220602103042-2113.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 cp multinode-20220602103042-2113-m03:/home/docker/cp-test.txt multinode-20220602103042-2113-m02:/home/docker/cp-test_multinode-20220602103042-2113-m03_multinode-20220602103042-2113-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 ssh -n multinode-20220602103042-2113-m02 "sudo cat /home/docker/cp-test_multinode-20220602103042-2113-m03_multinode-20220602103042-2113-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (16.31s)
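
The copy matrix above is built from two primitives, minikube cp and minikube ssh -n. A minimal sketch, assuming the illustrative profile "multidemo" (its nodes are then multidemo and multidemo-m02):
# host -> node
out/minikube-darwin-amd64 -p multidemo cp testdata/cp-test.txt multidemo:/home/docker/cp-test.txt
# node -> node
out/minikube-darwin-amd64 -p multidemo cp multidemo:/home/docker/cp-test.txt multidemo-m02:/home/docker/cp-test.txt
# verify on the target node
out/minikube-darwin-amd64 -p multidemo ssh -n multidemo-m02 "sudo cat /home/docker/cp-test.txt"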

                                                
                                    
TestMultiNode/serial/StopNode (14.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 node stop m03
E0602 10:33:01.290978    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220602103042-2113 node stop m03: (12.474205054s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status: exit status 7 (824.718866ms)

                                                
                                                
-- stdout --
	multinode-20220602103042-2113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220602103042-2113-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220602103042-2113-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status --alsologtostderr: exit status 7 (875.349802ms)

                                                
                                                
-- stdout --
	multinode-20220602103042-2113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220602103042-2113-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220602103042-2113-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 10:33:06.027771    6830 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:33:06.027986    6830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:33:06.027994    6830 out.go:309] Setting ErrFile to fd 2...
	I0602 10:33:06.027998    6830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:33:06.028097    6830 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 10:33:06.028258    6830 out.go:303] Setting JSON to false
	I0602 10:33:06.028272    6830 mustload.go:65] Loading cluster: multinode-20220602103042-2113
	I0602 10:33:06.028544    6830 config.go:178] Loaded profile config "multinode-20220602103042-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 10:33:06.028553    6830 status.go:253] checking status of multinode-20220602103042-2113 ...
	I0602 10:33:06.028881    6830 cli_runner.go:164] Run: docker container inspect multinode-20220602103042-2113 --format={{.State.Status}}
	I0602 10:33:06.098146    6830 status.go:328] multinode-20220602103042-2113 host status = "Running" (err=<nil>)
	I0602 10:33:06.098185    6830 host.go:66] Checking if "multinode-20220602103042-2113" exists ...
	I0602 10:33:06.098481    6830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602103042-2113
	I0602 10:33:06.167685    6830 host.go:66] Checking if "multinode-20220602103042-2113" exists ...
	I0602 10:33:06.167955    6830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 10:33:06.168009    6830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602103042-2113
	I0602 10:33:06.237478    6830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55825 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602103042-2113/id_rsa Username:docker}
	I0602 10:33:06.321225    6830 ssh_runner.go:195] Run: systemctl --version
	I0602 10:33:06.325769    6830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 10:33:06.334741    6830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220602103042-2113
	I0602 10:33:06.405359    6830 kubeconfig.go:92] found "multinode-20220602103042-2113" server: "https://127.0.0.1:55829"
	I0602 10:33:06.405383    6830 api_server.go:165] Checking apiserver status ...
	I0602 10:33:06.405420    6830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 10:33:06.415203    6830 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1571/cgroup
	W0602 10:33:06.423104    6830 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1571/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0602 10:33:06.423118    6830 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55829/healthz ...
	I0602 10:33:06.428601    6830 api_server.go:266] https://127.0.0.1:55829/healthz returned 200:
	ok
	I0602 10:33:06.428614    6830 status.go:419] multinode-20220602103042-2113 apiserver status = Running (err=<nil>)
	I0602 10:33:06.428621    6830 status.go:255] multinode-20220602103042-2113 status: &{Name:multinode-20220602103042-2113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0602 10:33:06.428634    6830 status.go:253] checking status of multinode-20220602103042-2113-m02 ...
	I0602 10:33:06.428861    6830 cli_runner.go:164] Run: docker container inspect multinode-20220602103042-2113-m02 --format={{.State.Status}}
	I0602 10:33:06.550085    6830 status.go:328] multinode-20220602103042-2113-m02 host status = "Running" (err=<nil>)
	I0602 10:33:06.550118    6830 host.go:66] Checking if "multinode-20220602103042-2113-m02" exists ...
	I0602 10:33:06.550560    6830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602103042-2113-m02
	I0602 10:33:06.621285    6830 host.go:66] Checking if "multinode-20220602103042-2113-m02" exists ...
	I0602 10:33:06.621602    6830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 10:33:06.621661    6830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602103042-2113-m02
	I0602 10:33:06.691409    6830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56018 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602103042-2113-m02/id_rsa Username:docker}
	I0602 10:33:06.774375    6830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 10:33:06.783606    6830 status.go:255] multinode-20220602103042-2113-m02 status: &{Name:multinode-20220602103042-2113-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0602 10:33:06.783632    6830 status.go:253] checking status of multinode-20220602103042-2113-m03 ...
	I0602 10:33:06.783872    6830 cli_runner.go:164] Run: docker container inspect multinode-20220602103042-2113-m03 --format={{.State.Status}}
	I0602 10:33:06.853476    6830 status.go:328] multinode-20220602103042-2113-m03 host status = "Stopped" (err=<nil>)
	I0602 10:33:06.853495    6830 status.go:341] host is not running, skipping remaining checks
	I0602 10:33:06.853502    6830 status.go:255] multinode-20220602103042-2113-m03 status: &{Name:multinode-20220602103042-2113-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (14.17s)
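
Stopping a single node degrades status rather than failing it outright, which is what the non-zero exits above show. A minimal sketch, assuming the illustrative profile "multidemo":
out/minikube-darwin-amd64 -p multidemo node stop m03
# exits 7 while any node is stopped, but still prints per-node state
out/minikube-darwin-amd64 -p multidemo status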

                                                
                                    
TestMultiNode/serial/StartAfterStop (25.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220602103042-2113 node start m03 --alsologtostderr: (24.037622996s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status: (1.154041392s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (25.31s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (116.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220602103042-2113
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220602103042-2113
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220602103042-2113: (37.040429468s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220602103042-2113 --wait=true -v=8 --alsologtostderr
E0602 10:34:12.597598    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220602103042-2113 --wait=true -v=8 --alsologtostderr: (1m18.960348286s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220602103042-2113
--- PASS: TestMultiNode/serial/RestartKeepsNodes (116.11s)

                                                
                                    
TestMultiNode/serial/DeleteNode (19.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 node delete m03
E0602 10:35:35.652346    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220602103042-2113 node delete m03: (16.649370782s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:422: (dbg) Done: kubectl get nodes: (1.481694693s)
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (19.07s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (25.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220602103042-2113 stop: (24.937415744s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status: exit status 7 (177.239691ms)

                                                
                                                
-- stdout --
	multinode-20220602103042-2113
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220602103042-2113-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status --alsologtostderr: exit status 7 (176.844556ms)

                                                
                                                
-- stdout --
	multinode-20220602103042-2113
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220602103042-2113-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 10:36:12.506693    7382 out.go:296] Setting OutFile to fd 1 ...
	I0602 10:36:12.507035    7382 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:36:12.507041    7382 out.go:309] Setting ErrFile to fd 2...
	I0602 10:36:12.507045    7382 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 10:36:12.507138    7382 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 10:36:12.507304    7382 out.go:303] Setting JSON to false
	I0602 10:36:12.507320    7382 mustload.go:65] Loading cluster: multinode-20220602103042-2113
	I0602 10:36:12.507599    7382 config.go:178] Loaded profile config "multinode-20220602103042-2113": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 10:36:12.507610    7382 status.go:253] checking status of multinode-20220602103042-2113 ...
	I0602 10:36:12.507954    7382 cli_runner.go:164] Run: docker container inspect multinode-20220602103042-2113 --format={{.State.Status}}
	I0602 10:36:12.570359    7382 status.go:328] multinode-20220602103042-2113 host status = "Stopped" (err=<nil>)
	I0602 10:36:12.570384    7382 status.go:341] host is not running, skipping remaining checks
	I0602 10:36:12.570392    7382 status.go:255] multinode-20220602103042-2113 status: &{Name:multinode-20220602103042-2113 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0602 10:36:12.570425    7382 status.go:253] checking status of multinode-20220602103042-2113-m02 ...
	I0602 10:36:12.570700    7382 cli_runner.go:164] Run: docker container inspect multinode-20220602103042-2113-m02 --format={{.State.Status}}
	I0602 10:36:12.634036    7382 status.go:328] multinode-20220602103042-2113-m02 host status = "Stopped" (err=<nil>)
	I0602 10:36:12.634060    7382 status.go:341] host is not running, skipping remaining checks
	I0602 10:36:12.634079    7382 status.go:255] multinode-20220602103042-2113-m02 status: &{Name:multinode-20220602103042-2113-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.29s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (59.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220602103042-2113 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220602103042-2113 --wait=true -v=8 --alsologtostderr --driver=docker : (57.266899146s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220602103042-2113 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:372: (dbg) Done: kubectl get nodes: (1.507795715s)
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (59.66s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (27.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220602103042-2113
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220602103042-2113-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220602103042-2113-m02 --driver=docker : exit status 14 (366.215205ms)

                                                
                                                
-- stdout --
	* [multinode-20220602103042-2113-m02] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220602103042-2113-m02' is duplicated with machine name 'multinode-20220602103042-2113-m02' in profile 'multinode-20220602103042-2113'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220602103042-2113-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220602103042-2113-m03 --driver=docker : (23.610277837s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220602103042-2113
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220602103042-2113: exit status 80 (613.239872ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20220602103042-2113
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220602103042-2113-m03 already exists in multinode-20220602103042-2113-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220602103042-2113-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220602103042-2113-m03: (2.928655997s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.57s)

                                                
                                    
TestScheduledStopUnix (97.49s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220602104208-2113 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220602104208-2113 --memory=2048 --driver=docker : (23.058633632s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220602104208-2113 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220602104208-2113 -n scheduled-stop-20220602104208-2113
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220602104208-2113 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220602104208-2113 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220602104208-2113 -n scheduled-stop-20220602104208-2113
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220602104208-2113
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220602104208-2113 --schedule 15s
E0602 10:43:01.292570    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220602104208-2113
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220602104208-2113: exit status 7 (115.972572ms)

                                                
                                                
-- stdout --
	scheduled-stop-20220602104208-2113
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220602104208-2113 -n scheduled-stop-20220602104208-2113
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220602104208-2113 -n scheduled-stop-20220602104208-2113: exit status 7 (113.49602ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220602104208-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220602104208-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220602104208-2113: (2.43481664s)
--- PASS: TestScheduledStopUnix (97.49s)
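
The scheduled-stop sequence above maps onto a handful of commands. A minimal sketch, assuming an illustrative profile name:
# arm a stop five minutes out, then check the pending timer
out/minikube-darwin-amd64 stop -p stopdemo --schedule 5m
out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p stopdemo
# cancel it, or re-arm with a short delay and wait for the host to reach Stopped
out/minikube-darwin-amd64 stop -p stopdemo --cancel-scheduled
out/minikube-darwin-amd64 stop -p stopdemo --schedule 15s
out/minikube-darwin-amd64 status --format={{.Host}} -p stopdemo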

                                                
                                    
TestSkaffold (56.09s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3218473068 version
skaffold_test.go:63: skaffold version: v1.38.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220602104346-2113 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220602104346-2113 --memory=2600 --driver=docker : (23.159190576s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3218473068 run --minikube-profile skaffold-20220602104346-2113 --kube-context skaffold-20220602104346-2113 --status-check=true --port-forward=false --interactive=false
E0602 10:44:12.599555    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
skaffold_test.go:110: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3218473068 run --minikube-profile skaffold-20220602104346-2113 --kube-context skaffold-20220602104346-2113 --status-check=true --port-forward=false --interactive=false: (18.004211061s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-6c6c4f9965-dj2zc" [6ea547c5-5519-4086-9bb8-974f4868d11f] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012398925s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-77f86fd4c-gd62v" [72c14a96-37b0-4a2c-b838-0a1831e95365] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009216053s
helpers_test.go:175: Cleaning up "skaffold-20220602104346-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220602104346-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220602104346-2113: (3.053194836s)
--- PASS: TestSkaffold (56.09s)
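
The skaffold flow above only needs a running profile and the matching kube-context. A minimal sketch, assuming skaffold is on PATH and an illustrative profile name:
out/minikube-darwin-amd64 start -p skaffolddemo --memory=2600 --driver=docker
skaffold run --minikube-profile skaffolddemo --kube-context skaffolddemo --status-check=true --port-forward=false --interactive=false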

                                                
                                    
TestInsufficientStorage (12.91s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220602104442-2113 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220602104442-2113 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (9.571214785s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"73fb1e2a-f566-43d5-af16-88614f591b04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220602104442-2113] minikube v1.26.0-beta.1 on Darwin 12.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"919abdb0-679e-419f-9da5-314ec9f82044","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14269"}}
	{"specversion":"1.0","id":"20d6018d-6478-4f2a-be18-0f7b3863a32b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig"}}
	{"specversion":"1.0","id":"7fb7e2a0-b524-4fd9-88d8-ef9ddf18dfeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"d5bff329-d3a6-46eb-985a-33abd298a04b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e3dff788-b23e-4e6f-b157-4a1b13b9a4c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube"}}
	{"specversion":"1.0","id":"c187e3f9-5a50-4a6d-a5cf-1cb5c6abf2e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"482cfd4a-b050-4126-9e93-b45cea2431f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7db004ca-1e7b-4f09-80e4-ef426b1535c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f00f09d-03a2-4ac1-b695-23ba62e1622a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"b37105e4-6d4c-445e-ab6e-4fd3b265d549","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220602104442-2113 in cluster insufficient-storage-20220602104442-2113","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"12057352-3ec6-408a-8a18-eb9a7bad5b9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d486f805-48aa-4b6f-a759-932cca7b57aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"10614ba5-30b3-4068-af0b-619f63ddbc3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220602104442-2113 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220602104442-2113 --output=json --layout=cluster: exit status 7 (418.055588ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220602104442-2113","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220602104442-2113","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 10:44:52.084813    8585 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220602104442-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220602104442-2113 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220602104442-2113 --output=json --layout=cluster: exit status 7 (416.594396ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220602104442-2113","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220602104442-2113","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 10:44:52.502224    8595 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220602104442-2113" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	E0602 10:44:52.510661    8595 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/insufficient-storage-20220602104442-2113/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220602104442-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220602104442-2113
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220602104442-2113: (2.50681865s)
--- PASS: TestInsufficientStorage (12.91s)
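The RSRC_DOCKER_STORAGE advice above can be followed directly from a shell. A minimal sketch of the suggested cleanup, assuming the docker container runtime is in use and that jq is available for reading the --layout=cluster JSON; the Disk Image Size increase from step 2 is done in the Docker Desktop preferences UI instead:
$ docker system prune -a                                                                                              # remove unused Docker data on the host (-a also drops unused images)
$ out/minikube-darwin-amd64 ssh -p insufficient-storage-20220602104442-2113 -- docker system prune                    # prune inside the node when the docker runtime is used
$ out/minikube-darwin-amd64 status -p insufficient-storage-20220602104442-2113 --output=json --layout=cluster | jq -r .StatusName   # reports InsufficientStorage while /var is still full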

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.22s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.26.0-beta.1 on darwin
- MINIKUBE_LOCATION=14269
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3917061860/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3917061860/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3917061860/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3917061860/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.22s)
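The "Unable to update hyperkit driver" warning does not fail this test: with --interactive=false minikube refuses to prompt for the sudo password, so the chown/chmod commands it printed are simply skipped. A minimal sketch of granting those permissions up front, using the MINIKUBE_HOME shown above (the path is illustrative):
$ export MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3917061860/001
$ sudo chown root:wheel "$MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit"   # driver binary must be root-owned
$ sudo chmod u+s "$MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit"          # setuid bit lets the driver run privileged without prompting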

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.03s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.26.0-beta.1 on darwin
- MINIKUBE_LOCATION=14269
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3043792597/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3043792597/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3043792597/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3043792597/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.03s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.74s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (3.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220602104942-2113
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220602104942-2113: (3.659591373s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.66s)

                                                
                                    
x
+
TestPause/serial/Start (39.15s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220602105035-2113 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0602 10:50:50.962270    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220602105035-2113 --memory=2048 --install-addons=false --wait=all --driver=docker : (39.150846685s)
--- PASS: TestPause/serial/Start (39.15s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.63s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220602105035-2113 --alsologtostderr -v=1 --driver=docker 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220602105035-2113 --alsologtostderr -v=1 --driver=docker : (6.615057816s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.63s)

                                                
                                    
x
+
TestPause/serial/Pause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220602105035-2113 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220602105227-2113 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220602105227-2113 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (365.447079ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220602105227-2113] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14269
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)
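Exit status 14 with the MK_USAGE message is the expected outcome of this sub-test: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the accepted forms, reusing the profile and driver from the run above (the version string is borrowed from the no-preload test later in this report):
$ out/minikube-darwin-amd64 config unset kubernetes-version                                                          # clear a globally configured version, as the error suggests
$ out/minikube-darwin-amd64 start -p NoKubernetes-20220602105227-2113 --no-kubernetes --driver=docker                # container only, no Kubernetes components
$ out/minikube-darwin-amd64 start -p NoKubernetes-20220602105227-2113 --kubernetes-version=v1.23.6 --driver=docker   # or pin a version, but then drop --no-kubernetes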

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (25.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220602105227-2113 --driver=docker 

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220602105227-2113 --driver=docker : (25.013812224s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220602105227-2113 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.56s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (18.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220602105227-2113 --no-kubernetes --driver=docker 
E0602 10:53:01.294618    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220602105227-2113 --no-kubernetes --driver=docker : (15.06694707s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220602105227-2113 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220602105227-2113 status -o json: exit status 2 (558.492468ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-20220602105227-2113","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220602105227-2113

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220602105227-2113: (3.288719492s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.91s)
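Restarting the existing profile with --no-kubernetes leaves the container up but the Kubernetes components stopped, hence the exit status 2 from the status call above alongside Host=Running, Kubelet=Stopped, APIServer=Stopped. A quick way to read the same fields the test checks (jq assumed available):
$ out/minikube-darwin-amd64 -p NoKubernetes-20220602105227-2113 status -o json | jq -r '.Host, .Kubelet, .APIServer'
# Running / Stopped / Stopped, matching the stdout block above; the non-zero exit code reflects the stopped components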

                                                
                                    
x
+
TestNoKubernetes/serial/Start (6.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220602105227-2113 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220602105227-2113 --no-kubernetes --driver=docker : (6.517761915s)
--- PASS: TestNoKubernetes/serial/Start (6.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220602105227-2113 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220602105227-2113 "sudo systemctl is-active --quiet service kubelet": exit status 1 (441.879657ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)
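VerifyK8sNotRunning leans on systemctl's exit code rather than its output: with --quiet, is-active prints nothing, so only the status matters. The same probe by hand:
$ out/minikube-darwin-amd64 ssh -p NoKubernetes-20220602105227-2113 "sudo systemctl is-active --quiet service kubelet"
$ echo $?   # non-zero confirms kubelet is not active; here the remote systemctl exited 3, which minikube ssh surfaces as exit status 1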

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (4.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (3.834133784s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220602105227-2113

                                                
                                                
=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220602105227-2113: (1.757325231s)
--- PASS: TestNoKubernetes/serial/Stop (1.76s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220602105227-2113 --driver=docker 

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220602105227-2113 --driver=docker : (6.152216803s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.15s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220602105227-2113 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220602105227-2113 "sudo systemctl is-active --quiet service kubelet": exit status 1 (424.336946ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (43.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220602104455-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220602104455-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (43.107399708s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (48.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220602104455-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
E0602 10:54:12.602185    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-20220602104455-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (48.776591967s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (48.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220602104455-2113 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220602104455-2113 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context auto-20220602104455-2113 replace --force -f testdata/netcat-deployment.yaml: (1.896609086s)
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-spl4k" [92539672-b7fb-4744-8aeb-0539af5a746b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-668db85669-spl4k" [92539672-b7fb-4744-8aeb-0539af5a746b] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.007887531s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.92s)
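The NetCatPod step is identical for every plugin group below: force-replace the netcat deployment from testdata/netcat-deployment.yaml, then poll until a pod labelled app=netcat is Running and Ready. A rough manual equivalent of that wait (kubectl wait assumed available):
$ kubectl --context auto-20220602104455-2113 replace --force -f testdata/netcat-deployment.yaml
$ kubectl --context auto-20220602104455-2113 wait --for=condition=Ready pod -l app=netcat --timeout=15m   # the test allows up to 15m0s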

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-m5lf8" [ef8716b4-63db-45e0-87f4-d43b135df517] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.015840326s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
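ControllerPod only confirms the CNI's own pod is healthy before the traffic tests run; for kindnet that means a pod labelled app=kindnet in kube-system (the calico group further down waits on k8s-app=calico-node instead). The manual equivalent:
$ kubectl --context kindnet-20220602104455-2113 -n kube-system get pods -l app=kindnet   # expect STATUS Running, as in the helpers_test.go line above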

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-20220602104455-2113 "pgrep -a kubelet"
E0602 10:54:29.029459    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220602104455-2113 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Done: kubectl --context kindnet-20220602104455-2113 replace --force -f testdata/netcat-deployment.yaml: (1.613560109s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-brrdq" [1bd61ab4-5f51-494c-a5e0-c5a8f72fb003] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-brrdq" [1bd61ab4-5f51-494c-a5e0-c5a8f72fb003] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006067636s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602104455-2113 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (5.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.111524734s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.11s)
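HairPin differs from Localhost above only in the target: instead of 127.0.0.1 the pod dials the hostname netcat, i.e. it tries to reach itself back through its own service name. With the auto (default network) configuration that hairpin connection fails, and the test records the exit-1 result as a PASS, so the non-zero exit here is the expected outcome; the kindnet and calico groups complete the same probe without error. The two probes side by side:
$ kubectl --context auto-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # pod reaching itself directly: succeeds
$ kubectl --context auto-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # pod reaching itself via the netcat service name: exits 1 in this run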

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220602104455-2113 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (67.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220602104456-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
E0602 10:54:56.724191    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-20220602104456-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (1m7.074920405s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-2n2zv" [f5dd9f8a-dc40-421d-9da9-247a41370bb9] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.016526468s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-20220602104456-2113 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-20220602104456-2113 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context calico-20220602104456-2113 replace --force -f testdata/netcat-deployment.yaml: (1.678218087s)
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-48w5l" [fbebd285-83c1-4da1-8f14-0a5ef5f9e52a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-48w5l" [fbebd285-83c1-4da1-8f14-0a5ef5f9e52a] Running
E0602 10:56:04.359683    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.009894936s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220602104456-2113 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-20220602104456-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-20220602104456-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (40.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220602104455-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220602104455-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (40.265116254s)
--- PASS: TestNetworkPlugins/group/false/Start (40.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20220602104455-2113 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (11.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220602104455-2113 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context false-20220602104455-2113 replace --force -f testdata/netcat-deployment.yaml: (1.633662174s)
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-57lmm" [1e51d13d-0424-4355-929a-41b233378070] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-57lmm" [1e51d13d-0424-4355-929a-41b233378070] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.008115548s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220602104455-2113 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (5.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.113519582s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (40.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220602104455-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220602104455-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (40.363363542s)
--- PASS: TestNetworkPlugins/group/bridge/Start (40.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220602104455-2113 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220602104455-2113 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context bridge-20220602104455-2113 replace --force -f testdata/netcat-deployment.yaml: (1.644380498s)
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-c4ttp" [c4d46f16-8fef-4567-8e02-ab7bb91a8a60] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-c4ttp" [c4d46f16-8fef-4567-8e02-ab7bb91a8a60] Running
E0602 10:58:01.297443    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.059451545s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220602104455-2113 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (42.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220602104455-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220602104455-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (42.942917319s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (39.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220602104455-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-20220602104455-2113 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (39.692044342s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (39.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220602104455-2113 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220602104455-2113 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context enable-default-cni-20220602104455-2113 replace --force -f testdata/netcat-deployment.yaml: (1.927374783s)
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-n5xhk" [0576f96b-4c01-4e16-89e7-faa8274e9277] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-n5xhk" [0576f96b-4c01-4e16-89e7-faa8274e9277] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.008371396s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-20220602104455-2113 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (13.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220602104455-2113 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Done: kubectl --context kubenet-20220602104455-2113 replace --force -f testdata/netcat-deployment.yaml: (1.635138594s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-2z5j7" [07b6558e-4b12-4f5c-a646-e22059273529] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:342: "netcat-668db85669-2z5j7" [07b6558e-4b12-4f5c-a646-e22059273529] Running
E0602 10:59:12.602366    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.007453826s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220602104455-2113 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220602104455-2113 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220602104455-2113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (50.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220602105919-2113 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6
E0602 10:59:19.970932    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:19.976469    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:19.987330    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:20.009403    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:20.049588    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:20.130424    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:20.292593    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:20.614333    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:21.254495    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:22.535032    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:23.887507    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:23.892572    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:23.902659    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:23.923809    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:23.963914    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:24.045479    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:24.205685    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:24.526124    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:25.097160    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:25.166755    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:26.446922    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:29.007178    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:29.030086    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 10:59:30.217330    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:34.127313    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:40.457635    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 10:59:44.368315    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 11:00:00.938268    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 11:00:04.850474    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220602105919-2113 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6: (50.399524999s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220602105919-2113 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Done: kubectl --context no-preload-20220602105919-2113 create -f testdata/busybox.yaml: (1.593501905s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [cd6f57bd-9435-406b-90fd-f1e84db181cf] Pending
helpers_test.go:342: "busybox" [cd6f57bd-9435-406b-90fd-f1e84db181cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [cd6f57bd-9435-406b-90fd-f1e84db181cf] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.013714829s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220602105919-2113 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.71s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220602105919-2113 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220602105919-2113 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.73s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220602105919-2113 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20220602105919-2113 --alsologtostderr -v=3: (12.522913432s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.52s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113: exit status 7 (119.032539ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220602105919-2113 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (337.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220602105919-2113 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6
E0602 11:00:41.898612    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 11:00:45.811003    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 11:00:52.181858    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:00:52.187043    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:00:52.197119    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:00:52.217685    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:00:52.258251    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:00:52.338472    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:00:52.498720    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:00:52.820951    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:00:53.461103    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:00:54.741458    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:00:57.301704    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:01:02.422151    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:01:12.741956    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:01:33.222909    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:01:54.172737    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:01:54.179019    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:01:54.189525    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:01:54.209865    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:01:54.250856    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:01:54.333038    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:01:54.495214    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:01:54.816655    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:01:55.456897    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:01:56.737118    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:01:59.297548    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:03.898780    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:04.419387    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:07.810680    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:14.184026    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:02:14.660059    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:35.142729    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:54.828696    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:54.835128    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:54.846222    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:54.868459    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:54.909431    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:54.989538    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:55.151704    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:55.471848    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:56.112055    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:57.392350    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:02:59.952552    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:03:01.378457    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 11:03:05.073395    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:03:15.315837    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:03:16.105032    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220602105919-2113 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6: (5m37.031418533s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220602105919-2113 -n no-preload-20220602105919-2113
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (337.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220602105906-2113 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20220602105906-2113 --alsologtostderr -v=3: (1.616206647s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220602105906-2113 -n old-k8s-version-20220602105906-2113: exit status 7 (131.822582ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220602105906-2113 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-mzc2x" [937d38bc-b2d7-4a95-ad97-cb199dfd5ef8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-mzc2x" [937d38bc-b2d7-4a95-ad97-cb199dfd5ef8] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.015000041s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-mzc2x" [937d38bc-b2d7-4a95-ad97-cb199dfd5ef8] Running
E0602 11:06:19.948758    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008531827s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220602105919-2113 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Done: kubectl --context no-preload-20220602105919-2113 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.570135506s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.58s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220602105919-2113 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.46s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (39.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220602110711-2113 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6
E0602 11:07:21.872225    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220602110711-2113 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6: (39.817731273s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (39.82s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220602110711-2113 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Done: kubectl --context default-k8s-different-port-20220602110711-2113 create -f testdata/busybox.yaml: (1.552171489s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [edb0b27f-6ea2-46e9-9cd5-3373e8caa7a5] Pending
helpers_test.go:342: "busybox" [edb0b27f-6ea2-46e9-9cd5-3373e8caa7a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0602 11:07:54.831830    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
helpers_test.go:342: "busybox" [edb0b27f-6ea2-46e9-9cd5-3373e8caa7a5] Running
E0602 11:08:01.381678    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.014887956s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220602110711-2113 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.68s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220602110711-2113 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220602110711-2113 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.74s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (12.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220602110711-2113 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220602110711-2113 --alsologtostderr -v=3: (12.580204681s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (12.58s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602110711-2113 -n default-k8s-different-port-20220602110711-2113
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602110711-2113 -n default-k8s-different-port-20220602110711-2113: exit status 7 (118.714221ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220602110711-2113 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (326.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220602110711-2113 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6
E0602 11:08:22.526048    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:08:54.111728    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:08:55.745327    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 11:09:03.534830    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:09:12.689432    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602101224-2113/client.crt: no such file or directory
E0602 11:09:20.058243    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory
E0602 11:09:21.802158    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory
E0602 11:09:23.975213    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602104455-2113/client.crt: no such file or directory
E0602 11:09:29.118446    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602104346-2113/client.crt: no such file or directory
E0602 11:09:31.232694    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kubenet-20220602104455-2113/client.crt: no such file or directory
E0602 11:10:11.561706    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:10:11.567611    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:10:11.577917    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:10:11.600080    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:10:11.695830    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:10:11.777073    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:10:11.937612    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:10:12.259610    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:10:12.899973    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:10:14.180111    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:10:16.742385    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:10:21.863951    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:10:32.104366    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:10:52.271691    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602104456-2113/client.crt: no such file or directory
E0602 11:10:52.585108    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:11:33.548044    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:11:54.183087    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/false-20220602104455-2113/client.crt: no such file or directory
E0602 11:12:44.453394    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101622-2113/client.crt: no such file or directory
E0602 11:12:54.837867    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:12:55.469984    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220602110711-2113 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6: (5m26.340580136s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602110711-2113 -n default-k8s-different-port-20220602110711-2113
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (326.84s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-hqkxc" [2b45dde7-82b4-439a-b822-381b15db860e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-hqkxc" [2b45dde7-82b4-439a-b822-381b15db860e] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.013865926s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-hqkxc" [2b45dde7-82b4-439a-b822-381b15db860e] Running
E0602 11:13:54.116468    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602104455-2113/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006925907s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220602110711-2113 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Done: kubectl --context default-k8s-different-port-20220602110711-2113 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.584011354s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.59s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220602110711-2113 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (36.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220602111446-2113 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220602111446-2113 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6: (36.980067496s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.98s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220602111446-2113 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.74s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (12.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220602111446-2113 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20220602111446-2113 --alsologtostderr -v=3: (12.558954546s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.56s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220602111446-2113 -n newest-cni-20220602111446-2113
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220602111446-2113 -n newest-cni-20220602111446-2113: exit status 7 (119.077165ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220602111446-2113 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220602111446-2113 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6
E0602 11:15:39.313119    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602105919-2113/client.crt: no such file or directory
E0602 11:15:43.118727    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602104455-2113/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220602111446-2113 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6: (17.044994027s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220602111446-2113 -n newest-cni-20220602111446-2113
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.52s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220602111446-2113 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (41.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220602111648-2113 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220602111648-2113 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6: (41.362056441s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.73s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220602111648-2113 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Done: kubectl --context embed-certs-20220602111648-2113 create -f testdata/busybox.yaml: (1.605919806s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [d7c0bbc6-d0e0-4106-803c-c7786c58c4c8] Pending
helpers_test.go:342: "busybox" [d7c0bbc6-d0e0-4106-803c-c7786c58c4c8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [d7c0bbc6-d0e0-4106-803c-c7786c58c4c8] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.013166913s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220602111648-2113 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220602111648-2113 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220602111648-2113 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220602111648-2113 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20220602111648-2113 --alsologtostderr -v=3: (12.557974452s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.56s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220602111648-2113 -n embed-certs-20220602111648-2113
E0602 11:17:53.994507    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220602111648-2113 -n embed-certs-20220602111648-2113: exit status 7 (117.215473ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220602111648-2113 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/embed-certs/serial/SecondStart (334.6s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220602111648-2113 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6
E0602 11:17:54.842280    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/bridge-20220602104455-2113/client.crt: no such file or directory
E0602 11:17:55.275659    2113 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14269-960-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602110711-2113/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220602111648-2113 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6: (5m34.077765078s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220602111648-2113 -n embed-certs-20220602111648-2113
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (334.60s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-gg4gx" [a8426b52-aeeb-4e11-8366-7cbf31b79047] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-gg4gx" [a8426b52-aeeb-4e11-8366-7cbf31b79047] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.013296709s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.58s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-gg4gx" [a8426b52-aeeb-4e11-8366-7cbf31b79047] Running

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007592044s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220602111648-2113 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Done: kubectl --context embed-certs-20220602111648-2113 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.573348226s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.58s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.5s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220602111648-2113 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.50s)

Test skip (18/282)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.23.6/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6/cached-images (0.00s)

TestDownloadOnly/v1.23.6/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6/binaries (0.00s)

TestAddons/parallel/Registry (14.36s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 10.822756ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-4hnr2" [dcb26faa-d7ab-42ec-a30e-69422a909208] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010922547s
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-proxy-bjpp8" [c4cec242-5ecf-48fd-8225-596852169b0a] Running
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007256568s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220602101224-2113 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220602101224-2113 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: (dbg) Done: kubectl --context addons-20220602101224-2113 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.282357637s)
addons_test.go:305: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.36s)

TestAddons/parallel/Ingress (12.54s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220602101224-2113 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220602101224-2113 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220602101224-2113 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [a09f2a57-7532-4914-b5b4-e094b7cb85fd] Pending
helpers_test.go:342: "nginx" [a09f2a57-7532-4914-b5b4-e094b7cb85fd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [a09f2a57-7532-4914-b5b4-e094b7cb85fd] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.007128768s
addons_test.go:212: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220602101224-2113 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:232: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.54s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (11.24s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220602101622-2113 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220602101622-2113 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-2ngbb" [dde2c6fc-2418-4c4c-820c-0bb660dd105d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-2ngbb" [dde2c6fc-2418-4c4c-820c-0bb660dd105d] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.007176922s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (11.24s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.71s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220602104455-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220602104455-2113
--- SKIP: TestNetworkPlugins/group/flannel (0.71s)

TestNetworkPlugins/group/custom-flannel (0.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220602104455-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-20220602104455-2113
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.56s)

TestStartStop/group/disable-driver-mounts (0.63s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220602105918-2113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220602105918-2113
--- SKIP: TestStartStop/group/disable-driver-mounts (0.63s)